the Creative Commons Attribution 4.0 License.
Assimilating Geostationary Satellite Visible Reflectance Data: developing and testing the GSI-EnKF-CRTM-Vis technique
Abstract. Satellite visible reflectance observations in cloud- and precipitation-affected regions contain substantial information on weather systems, but data assimilation (DA) of visible data remains challenging due to the complexity of forward operators and the non-Gaussian distribution of cloud variables. This study developed an interface within the framework of the popular Gridpoint Statistical Interpolation (GSI) system to assimilate synthetic visible reflectance simulated by the Community Radiative Transfer Model (CRTM). The interface employs spatial interpolation to ensure accurate alignment between model grids and satellite data, and facilitates a bidirectional mapping between the state-variable space and the observation space. The key implementations within the newly developed GSI-EnKF-CRTM-Vis DA technique include integrating a new observation type for geostationary visible imagery, incorporating the CRTM module for simulating visible reflectance, and extending the cloud-related control variables. We employed an ensemble-based DA framework in which ensemble members were initialized with multiple physical parameterization schemes, thereby better representing the ensemble spread arising from differences among cloud parameterizations. The performance of GSI-EnKF-CRTM-Vis, configured with the Ensemble Square Root Filter (EnSRF) algorithm, was evaluated by assimilating the Himawari-8 Advanced Himawari Imager (AHI) 0.64 μm visible reflectance for a heavy rainfall event over East Asia on 21 September 2024 within an Observing System Simulation Experiment (OSSE) framework. The experimental results demonstrated that DA of visible reflectance effectively corrected the overestimated cloud water path (CWP), reducing the mean absolute error by 1.5 % on average, with forecast improvements lasting 6 hours. Probability density function analysis confirmed substantial correction of thin clouds (reflectance less than 0.2 and CWP less than 0.1 kg·m⁻²).
DA of visible reflectance improved the spatial extent of light precipitation, as evidenced by improved Equitable Threat Scores (ETS) across thresholds (except the 0.1 mm threshold) and a reduced False Alarm Rate (FAR). For the U- and V-component winds, temperature, and water vapor mixing ratio, DA of visible reflectance generated negligible adjustments, as visible reflectance data are insensitive to these non-cloud variables. The newly developed GSI-EnKF-CRTM-Vis DA technique enables ensemble-based DA of satellite visible reflectance with ensemble members initialized with multiple physical parameterization schemes.
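The ETS and FAR cited in the abstract are standard 2×2 contingency-table scores. As a minimal sketch of how they are computed (the synthetic rain distribution, error model, and 5 mm threshold below are illustrative, not the paper's configuration; FAR is computed here as the false-alarm ratio):

```python
import numpy as np

def ets_far(fcst, obs, thr):
    """Equitable Threat Score and false-alarm ratio at one rain threshold."""
    f, o = fcst >= thr, obs >= thr
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    # hits expected by chance, given the marginal event frequencies
    hits_rand = (hits + misses) * (hits + false_alarms) / f.size
    ets = (hits - hits_rand) / (hits + misses + false_alarms - hits_rand)
    far = false_alarms / (hits + false_alarms)
    return ets, far

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.0, 10_000)            # synthetic 6-h rainfall [mm]
fcst = obs + rng.normal(0.0, 1.0, 10_000)    # forecast with random error

ets, far = ets_far(fcst, obs, thr=5.0)
print(ets, far)
```

A perfect forecast gives ETS = 1 and FAR = 0; a random forecast gives ETS ≈ 0.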
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-4553', Anonymous Referee #1, 26 Nov 2025
- RC2: 'Comment on egusphere-2025-4553', Anonymous Referee #2, 23 Dec 2025
Opening statement
I thank the authors for sharing their manuscript and for the considerable effort invested in developing and testing a data assimilation framework for geostationary visible reflectance observations. The paper addresses a timely and important topic for the atmospheric data assimilation community, namely the use of visible satellite information to constrain cloud and precipitation processes at mesoscale resolution. The technical developments within the GSI-EnKF-CRTM framework, along with the OSSE-based evaluation, demonstrate a substantial amount of work. The study has clear potential to contribute to ongoing efforts toward all-sky data assimilation and improved exploitation of geostationary satellite observations.

The following comments are provided in the spirit of open scientific discussion and are intended to help strengthen the clarity, robustness, and interpretability of the presented results.

In its current form, the manuscript requires major revision before it can be considered for publication. In particular, clearer formulation of research objectives, improved description of the OSSE setup plus error assumptions, a more careful interpretation of the results (especially with respect to systematic biases and the limited impact on non-cloud variables), and avoidance of any speculative statements are needed. Addressing these points will substantially strengthen the scientific clarity of the manuscript and improve its suitability for publication. Below, I provide more specific major, minor, and typographic comments to support and facilitate the revision process.
Major comments
1) INTRODUCTION & CONCLUSIONS
- Section 1 (Introduction): The paper would benefit substantially from stating explicit research questions or objectives at the end of the introduction. This would help readers understand what to expect and what progress is being claimed. For example: What is the impact of assimilating visible reflectance/radiances in the GSI–EnKF–CRTM configuration? Please also briefly indicate the approach/methods used to address these questions (OSSE design, cycling strategy, evaluation metrics).
- Section 4 (Conclusions): Section 4 requires substantial revision to address the open issues raised in this review. Please avoid speculation and ensure the conclusions directly answer the research questions/goals proposed in the introduction. The conclusions would also benefit from a clearer discussion of implications of the findings, including (if appropriate) limiting steps toward operational application (computational cost, bias handling, etc.).
- Line 492: The meaning of “intrinsically” is unclear in context; please rephrase.
- Lines 493–494: To me, it’s unclear why the reported sign changes occur and what the implications are. Please add an explanation and link explicitly to the DA update mechanism and the observation operator sensitivity.
2) VIS DA results and their implications
A recurring theme in the results is (i) small impact on non-cloud variables and (ii) large impact on thin clouds. These conclusions need clearer justification and discussion with respect to existing literature.
- Section 2.1 / Lines 145–156: I disagree with the wording that cloud-related control variables “must” be included. VIS observations can be assimilated to update only conventional variables (e.g., temperature/humidity) via ensemble covariances or adjoint sensitivities; cloud/hydrometeor control variables are optional, though they may increase impact substantially. Please revise the text accordingly.
- Line 327 “VIS reflectance DA has a limited effect on non-cloud variables”: Based on the presented results, I am not convinced that VIS DA has a limited or negligible effect on non-cloud variables. Explanation of why the stated behavior occurs is missing. Discuss whether this is consistent with prior studies. This should be explicitly revisited in the conclusions/discussion and placed in the context of existing literature on VIS assimilation.
- Line 332 “for non-cloud variables approximates zero (𝑯n≈0).”: This statement appears to contradict Figure 6, where temperature and moisture do show some adjustments. Please reconcile the text with the evidence shown in the figure (and clarify whether the plotted profiles represent a member or the ensemble mean).
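On the Hₙ ≈ 0 point raised here: in an EnKF, a state variable whose true sensitivity to the simulated observation is zero still receives a small, noise-driven increment, because the sample covariance across a finite ensemble never vanishes exactly. A toy illustration (all numbers hypothetical; the linear "operator" merely stands in for CRTM):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens = 40

# Toy ensemble: reflectance strongly tied to cloud water, ~independent of T.
cloud = rng.normal(0.20, 0.05, n_ens)            # cloud water path [kg m^-2]
temp = rng.normal(280.0, 1.0, n_ens)             # temperature [K]
hx = 0.8 * cloud + rng.normal(0.0, 0.01, n_ens)  # simulated reflectance

obs, obs_err_var = 0.12, 0.02**2
innov = obs - hx.mean()
denom = hx.var(ddof=1) + obs_err_var

# Kalman gain per state variable: K = cov(x, Hx) / (var(Hx) + R)
inc_cloud = np.cov(cloud, hx, ddof=1)[0, 1] / denom * innov
inc_temp = np.cov(temp, hx, ddof=1)[0, 1] / denom * innov

# Spread-normalized increments: large for cloud, small but nonzero for T.
# This is why weak temperature/moisture adjustments can appear in profile
# plots even when the "true" sensitivity is zero.
print(abs(inc_cloud) / cloud.std(ddof=1), abs(inc_temp) / temp.std(ddof=1))
```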
- Lines 379–390 and Figure 9: Figure 9 suggests notable systematic differences between the nature run and the model climatology/distributions. This raises an important DA question: to what extent is the DA correcting systematic biases versus random/displacement errors? Classical DA assumes unbiased errors; if systematic differences prevail, they can affect innovations and lead to suboptimal updates. Please discuss this explicitly and summarize the implications.
- Lines 388–389 (thin-cloud conclusion): The conclusion that VIS DA mainly corrects thin clouds may depend on evaluating the ensemble mean. Please clarify whether this conclusion holds at the member level. How would the conclusion change if you analyzed each member’s increments or departures? Relatedly, how might results differ if the OSSE did not exhibit strong systematic differences between the nature run and the ensemble?
- Line 395 onward: As noted above, this section would benefit from an explicit evaluation/discussion of the extent to which improvements arise from correcting biases (systematic) versus random errors (e.g., displacement or wrong CWP). Please consider adding suitable diagnostics/metrics, and at a minimum discuss this limitation.
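One cheap diagnostic for the systematic-versus-random question above: departures d = forecast − truth obey the exact identity MSE = bias² + error variance, so the two contributions can be reported separately. A sketch with synthetic departures (the 0.05 bias and 0.10 spread are made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical CWP departures with an imposed systematic offset.
d = rng.normal(0.05, 0.10, 10_000)   # bias 0.05, random std 0.10 (illustrative)

mse = np.mean(d**2)
bias = d.mean()
rand_var = d.var()                   # ddof=0, so the identity below is exact

# MSE splits exactly into a systematic part and a random part.
print(mse, bias**2 + rand_var)
```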
- Lines 398–399: The manuscript primarily shows impacts during the assimilation window. Please clarify and show (or discuss) how long-lasting the impact is with the forecast lead time. Have forecasts been verified in addition to the analysis states?
- Line 427: If the statement is correct and supported by results, it raises the question of the persistence of the imposed cloud increments. If temperature/humidity are not adjusted consistently toward saturation, a cloud increment introduced by VIS DA could quickly evaporate after forecast initialization. Please discuss model balance/consistency and the expected persistence of cloud increments.
3) OSSE setup and synthetic observation generation
- Section 2.2.3: It is not sufficiently clear how the synthetic observations were generated. Please add a concise but complete description: what fields from the nature run (ERA5) were used, how CRTM was driven, whether observation noise was added, and how representativeness was handled.
- Line 260 (observation error choice): The observation error specification is critical for any OSSE. The discussion is currently too limited. Please expand: how does the chosen observation error compare to (i) ensemble spread/background error and (ii) typical departure statistics? Did you consider estimating errors using O–B/O–A statistics? Were sensitivity tests performed (even limited)? A spread–error comparison would be very informative and would clarify the relative weighting between background and observations.
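A spread–error comparison of the kind suggested here is inexpensive: for a well-calibrated ensemble, the mean ensemble spread should match the RMSE of the ensemble mean (up to a √((N+1)/N) factor, ignored below). A synthetic sketch of the consistency ratio (all distributions illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_cases = 40, 5000

# Idealized setup: truth and members are draws around the same state,
# so the ensemble is perfectly dispersed by construction.
state = rng.normal(0.0, 1.0, n_cases)
truth = state + rng.normal(0.0, 0.5, n_cases)
ens = state[:, None] + rng.normal(0.0, 0.5, (n_cases, n_ens))

err = ens.mean(axis=1) - truth
rmse = np.sqrt(np.mean(err**2))
spread = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))

ratio = spread / rmse   # ~1: well dispersed; <1: under-; >1: over-dispersive
print(ratio)
```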
- Figure 9 (comparability/resolution): How comparable are ERA5/nature run “truth”/synthetic observations to model/ensemble outputs in terms of effective resolution? Would it be appropriate to super-ob to a common resolution for fair comparison? Please comment and/or justify the chosen verification approach.
4) Use of “significant” and “scientific significance.”
Please use “significant” only when supported by statistical significance testing. In multiple places, the term is used in a qualitative sense, which is misleading for scientific writing. Below are several examples from the manuscript:
- Line 252 (and other occurrences): “significant” is not appropriate unless a statistical test is performed. Please replace with alternatives such as “substantial,” “notable,” or “pronounced,” depending on meaning.
- Line 306: The phrase “scientific significance” is not appropriate here; no statistical evidence is provided. Please reword and frame conclusions as case-study findings.
- Lines 328 and 375: Same issue, please revise terminology accordingly.
Minor comments
- Lines 86–94: This paragraph lacks a clear lead sentence; as a reader, it is unclear why it is introduced here. Consider adding a first sentence explaining why perturbations/multi-physics are used and then reorganizing (potentially moving lines 95–102 earlier).
- Figure 2: Consider using consistent y-axis limits (0–1) in both panels to better highlight solar/viewing-angle impacts on reflectance. It may also help to relate this figure to similar sensitivity illustrations in the literature (e.g., Geiss et al., 2021; https://doi.org/10.5194/acp-21-12273-2021).
- Section 2.2.1: Please discuss computational cost: how expensive is CRTM VIS simulation, and is it fast enough for operational application (or what developments are needed)?
- Figure 3 (and other figures): Please use colorblind-friendly colormaps. Avoid using the same colormap within a multi-panel figure when panels show different quantities; this can be confusing. For Figure 3(a), maximum reflectances appear below ~0.6 for a tropical cyclone; please explain whether this is expected given geometry/cloud properties or indicates a limitation in the simulation.
- Table 1 / ensemble: Since the ensemble/OSSE was constructed specifically for this study, it would be important to demonstrate that the ensemble is not severely under- or over-dispersive. Please consider adding diagnostics such as:
- spread vs. error over time (for temperature and/or reflectance/CWP),
- rank histograms (or similar calibration diagnostics),
- or at minimum discuss whether dispersion was evaluated.
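For reference, the rank histogram suggested above takes only a few lines: count where the verifying value falls among the sorted members; a flat histogram indicates adequate dispersion. Sketch with a calibrated synthetic ensemble:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_cases = 20, 20_000

# Calibrated case: truth is statistically indistinguishable from the members,
# so its rank should be uniform over the n_ens + 1 possible positions.
ens = rng.normal(0.0, 1.0, (n_cases, n_ens))
truth = rng.normal(0.0, 1.0, n_cases)

ranks = (ens < truth[:, None]).sum(axis=1)
freq = np.bincount(ranks, minlength=n_ens + 1) / n_cases

# Flat -> well dispersed; U-shaped -> under-dispersive; domed -> over-dispersive.
print(freq)
```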
- Section 2.2.3 (H operator): Please clarify whether the observation operator H is treated as a linear or nonlinear operator in the EnSRF context. Since the VIS operator is nonlinear, explain whether any linearization is assumed and how this affects the results.
- Line 257: Please verify the assumption that single-observation experiments are independent. Depending on the definition of localization radius (e.g., Gaspari–Cohn half-width vs cutoff), the cutoff distance can exceed the nominal radius.
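The localization-cutoff point can be checked directly: the Gaspari–Cohn correlation function with half-width c has compact support only at distance 2c, so single-observation experiments separated by less than twice the nominal "radius" still overlap. A sketch (the 100 km half-width is hypothetical):

```python
import numpy as np

def gaspari_cohn(r, c):
    """Gaspari-Cohn 5th-order piecewise correlation; c is the half-width."""
    z = np.abs(np.asarray(r, dtype=float)) / c
    g = np.zeros_like(z)
    inner = z <= 1.0
    outer = (z > 1.0) & (z < 2.0)
    zi, zo = z[inner], z[outer]
    g[inner] = -0.25 * zi**5 + 0.5 * zi**4 + 0.625 * zi**3 - (5 / 3) * zi**2 + 1.0
    g[outer] = ((1 / 12) * zo**5 - 0.5 * zo**4 + 0.625 * zo**3
                + (5 / 3) * zo**2 - 5.0 * zo + 4.0 - (2 / 3) / zo)
    return g

c = 100.0                                   # hypothetical half-width [km]
w = gaspari_cohn([0.0, 100.0, 150.0, 199.0, 200.0], c)
print(w)  # weights stay nonzero out to 2c = 200 km, not just to c
```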
- Lines 263–278: Please rewrite for clarity; the motivation is understandable after careful reading, but the current phrasing is difficult to follow.
- Figure 6 (analysis diagnostics): Consider whether it would be helpful to show mean and standard deviation of first-guess and analysis departures vs height over all samples to demonstrate whether both biases and errors are reduced by VIS DA. Also clarify in the caption whether the figure shows a single random profile, or whether it is an ensemble mean or a selected member. Please also comment on the representativeness of ERA5 vs model profiles.
- Figure 7: Use a diverging colormap for increments centered at zero (white at zero, red/blue for positive/negative). Consider consistent vmin/vmax across the increment panels to facilitate time-to-time comparison.
- Line 372: The phrase “mismatched spatial patterns” is unclear. Please explain what is meant and why it is attributed to nonlinear model physics rather than, for example, sampling error or representativeness differences.
- Figure 8: Use consistent color scaling across panels (b1–b4) to allow visual comparison.
- Figure 10 (optional): For future studies, consider spatial/ensemble verification (e.g., Brier Score, Fractions Skill Score) to better interpret probabilistic/spatial precipitation performance.
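The Fractions Skill Score mentioned above compares neighborhood exceedance fractions rather than pointwise matches, so it credits a displaced-but-realistic rain field. A minimal implementation on synthetic fields (distribution, displacement, threshold, and window size all illustrative):

```python
import numpy as np

def fss(fcst, obs, thr, n):
    """Fractions Skill Score over n x n neighborhoods (Roberts & Lean 2008)."""
    def frac(field):
        # neighborhood exceedance fraction via a summed-area table
        s = np.cumsum(np.cumsum((field >= thr).astype(float), axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return (s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]) / n**2

    pf, po = frac(fcst), frac(obs)
    return 1.0 - np.mean((pf - po)**2) / (np.mean(pf**2) + np.mean(po**2))

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 1.0, (50, 50))      # synthetic rain field [mm]
fcst = np.roll(obs, 3, axis=1)           # forecast = spatially displaced obs

fss1 = fss(fcst, obs, thr=3.0, n=1)      # pointwise: heavily penalized
fss9 = fss(fcst, obs, thr=3.0, n=9)      # 9x9 neighborhood: credits proximity
print(fss1, fss9)
```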
- Lines 420–421: The conclusion of negligible impact on non-cloud variables may depend on using mean profiles. Consider whether evaluating member-wise departures (first guess vs analysis) and then averaging could reveal a clearer signal.
- Lines 433–434: Please clarify how this conclusion is supported by the presented results.
- Figure 12: Panels are difficult to read at the current size. Consider focusing on a subset (e.g., one time) and providing larger panels. It could also help to show analysis/first-guess departures (both mean and standard deviation) rather than only mean bias.
- Figures 13 and 15: Clarify whether panels (c) show the ensemble mean or a selected member. Ensure consistent color scaling or colorbars between Figures 13 and 15. Also consider changing the lower boundary when displaying light precipitation. For me, the threshold (<10/30 mm) for non-white colors is not optimal, as white incorrectly suggests “no precipitation.” Better to use, e.g., 1 mm for both figures.
- Figure 14: As mentioned above, spatial verification metrics would be valuable in future work.
Language/Formatting comments
- Line 251: Remove the word “observation” (duplicate/awkward phrasing).
- Figure 8 caption: “of of” → “of”.
Citation: https://doi.org/10.5194/egusphere-2025-4553-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 263 | 150 | 24 | 437 | 16 | 15 |
Title: Assimilating Geostationary Satellite Visible Reflectance Data: developing and testing the GSI-EnKF-CRTM-Vis technique
Authors: Luo et al.
Assimilating satellite visible reflectance data has long been a challenge for the NWP community. The manuscript investigates the assimilation of visible reflectance data from the geostationary satellite Himawari-8, using CRTM as the forward operator and GSI as the assimilation platform. Within an OSSE framework, the results show that assimilating visible reflectance can effectively correct the CWP and improve the spatial extent of light precipitation, although impacts on non-cloud variables are negligible. The manuscript is well written. I have several specific comments below.
Minor comments: