This work is distributed under the Creative Commons Attribution 4.0 License.
Forward Modeling of Spaceborne Active Radar Observations
Abstract. Accurate forward models, particularly radiative transfer models, are essential for the assimilation of both passive and active satellite observations in modern data assimilation frameworks. The Community Radiative Transfer Model (CRTM), widely used in the assimilation of satellite data within numerical weather prediction systems, especially in the United States, has recently been expanded to include an active radar module. This study assesses the new module across multiple radar frequencies using observations from the Earth Clouds, Aerosols and Radiation Explorer Cloud Profiling Radar (EarthCARE CPR), the CloudSat CPR, and the Global Precipitation Measurement Dual-Frequency Precipitation Radar (GPM DPR).
Simulated radar reflectivities were compared with the spaceborne measurements to assess the impacts of hydrometeor profiles, particle size distributions (PSDs), and frozen hydrometeor habits. The results indicate that both PSD selection and particle shape significantly influence the simulated reflectivities, with snow particle habits introducing differences of up to 4 dBZ in W-band comparisons. For the GPM DPR, reflectivities simulated using the Thompson PSD showed better agreement with observations compared to those using the Abel PSD. The findings highlight the strong sensitivity of forward radar simulations to microphysical assumptions, underscoring their potential to improve the assimilation of spaceborne radar data in NWP models.
Status: final response (author comments only)
- RC1: 'Referee comment', Alan Geer, 04 Nov 2025
- RC2: 'Comment on egusphere-2025-4372', Anonymous Referee #2, 16 Nov 2025
The paper “Forward Modeling of Spaceborne Active Radar Observations” presents the implementation of a forward operator for spaceborne radar reflectivity observations within the CRTM model. The operator is evaluated for three types of spaceborne observing systems covering a broad frequency range (Ku, Ka, and W bands). The overall objective of the study is appealing: providing a spaceborne reflectivity simulator within an operational radiative transfer framework is a valuable step toward future data assimilation applications.
The paper is well written and easy to follow. However, I think the content of Section 2 could be re-arranged and extended with more details (see below). Besides, contrary to the approach used in other papers on the validation of spaceborne reflectivities, the authors chose to validate the simulator using hydrometeor profiles retrieved from observations rather than predicted by NWP models, thereby avoiding the uncertainties of the latter. I think it is a nice approach. However, the retrievals themselves are not explained in the current manuscript, and details about them should be added in a future version.
Abstract:
- L4: “active radar module”: I think “active” can be removed as a radar is always an active sensor. (same for the title)
Introduction:
- L 26 to L30: I recommend the authors to add the appropriate references on the current existing spaceborne missions.
- PMR onboard FY3G is not mentioned as a current spaceborne mission (see for instance: https://rmets.onlinelibrary.wiley.com/doi/10.1002/qj.4964). The authors could also add planned future spaceborne radar observing systems (e.g. WIVERN, INCUS).
- L36: The authors could also cite the work of Fielding et al. (2021)
Section 2 :
- “Forward model in DA” is a misleading title here, as the forward operator is never applied to any NWP model in this paper and there are no DA experiments. Therefore, I think the introduction of Section 2 is out of the scope of the current paper (especially the general equation of 3DVar).
- “Radar forward model” would fit better after the CRTM section, as the radar solver is included in CRTM.
- Section 2.1: it is not explained how the authors account for the subgrid variability of hydrometeors, which matters especially if the final goal is to apply this operator to global NWP models. Also, the authors do not mention the dielectric properties, which are very important for computing scattering properties.
Section 3:
- L115: it would be informative to add the radar frequencies of AOS
- L122-124: I personally disagree with the authors. Global datasets of hydrometeors exist from NWP models and have been used in many studies to validate radar forward operators (Di Michele et al. 2012; Fielding et al. 2021; Ikuta et al. 2016; David et al. 2025; etc.).
- L123: The authors should explain the approximations made in the retrieval, as this is crucial to interpret the results of the sensitivity study to the PSD and to the particle shapes (Section 4). For example, is there any assumption on the PSD in the retrieval, which could then explain why the results are closer when the Thompson PSD is used in CRTM (Fig. 4)?
- L134-136: In my opinion the sentence is a bit too long
- L173-176: The authors should add a reference when they argue that the graupel signal is similar to that of ice, or demonstrate this point. Indeed, this sentence is a bit counterintuitive, as the properties of the two species are different in nature (mass, PSDs, dielectric properties).
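The PSD sensitivity raised in the comments above can be made concrete with a minimal sketch. Assuming Rayleigh scattering and an exponential PSD N(D) = N0 exp(-ΛD) constrained to a fixed liquid water content (all values here are illustrative and not taken from the manuscript; this is not the CRTM implementation), changing only the intercept parameter N0 shifts the simulated reflectivity by several dB:

```python
import numpy as np

def exp_psd_reflectivity(n0, lwc, rho_w=1000.0):
    """Rayleigh reflectivity (mm^6 m^-3) of an exponential PSD
    N(D) = n0 * exp(-lam * D), constrained to a given liquid
    water content lwc (kg m^-3). D is in m, n0 in m^-4.

    LWC = (pi/6) rho_w n0 Gamma(4) / lam^4 -> lam = (pi rho_w n0 / lwc)^(1/4)
    Z   = n0 Gamma(7) / lam^7 (in m^6 m^-3), converted to mm^6 m^-3.
    """
    lam = (np.pi * rho_w * n0 / lwc) ** 0.25   # slope parameter, m^-1
    z_m6 = 720.0 * n0 / lam**7                 # Gamma(7) = 720
    return z_m6 * 1e18                         # mm^6 m^-3

# Same water content (0.5 g m^-3), two intercept parameters:
for n0 in (8e6, 8e7):                          # Marshall-Palmer value vs 10x
    z = exp_psd_reflectivity(n0, 0.5e-3)
    print(f"n0 = {n0:.0e} m^-4 -> {10 * np.log10(z):.1f} dBZ")
```

At a fixed 0.5 g m^-3, raising N0 from the Marshall-Palmer value to ten times that value lowers the Rayleigh reflectivity by roughly 7.5 dB, which is why the retrieval's own PSD assumption must be documented before interpreting the sensitivity results.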
Section 4:
- The authors should explain why the sensitivity study is not performed at the three frequencies (for the PSDs as well as on the shapes).
- L193: Another point is that there is usually a spatial and temporal mismatch between NWP models and radar observations, which makes it difficult to disentangle errors in the forward operator from the effects of that mismatch (see for instance Section 4 in Borderies et al. 2018).
- L197-200: the authors should add the end of the period of study.
- In the text, the units of the comparisons of Table 1 should be dB, not dBZ (the difference between two dBZ values is in dB).
- Legend of Table 1: what are the shapes used for the simulations?
- L221: Is there any reference for the retrieval?
- L222: The authors should explain why the simulations do not account for rain effect
- L260: the authors should explain why the simulations do not account for frozen hydrometeors. It is confusing for the reader.
- L266: “w-band” -> “W-band”
- L267: in that case, the authors should say that they use the corrected reflectivity in the observations.
- I think that this result about the larger differences for smaller content is very interesting. Is there any paper in the literature to support this result, or any physical argument to support these findings? Is this larger difference at smaller contents only due to the shape, or also to the mass-diameter relationships which is associated to each shape? I would recommend the authors to add some more explanations about this point.
- L290: In my opinion the first perspective would be to test this forward operator on NWP model fields (at least before estimating any observation errors for DA).
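On the units point above (Table 1): reflectivity values are expressed in dBZ, but their difference is a dimensionless ratio in dB, as a short sketch shows (the numbers are illustrative only):

```python
import math

def to_dbz(z_lin):
    """Convert linear reflectivity (mm^6 m^-3) to dBZ."""
    return 10.0 * math.log10(z_lin)

z_obs, z_sim = 1000.0, 400.0            # linear reflectivities, mm^6 m^-3
bias = to_dbz(z_sim) - to_dbz(z_obs)    # difference of two dBZ values
# The difference equals 10*log10 of the *ratio*, i.e. a dimensionless dB:
assert abs(bias - 10.0 * math.log10(z_sim / z_obs)) < 1e-12
print(f"bias = {bias:.2f} dB")
```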
Citation: https://doi.org/10.5194/egusphere-2025-4372-RC2
Review of “Forward Modeling of Spaceborne Active Radar Observations”
This manuscript evaluates the new CRTM operator for radar reflectivity by comparing simulated and actual observations from three existing and past space radars at Ku, Ka and W bands. There is also some exploration of the sensitivity to microphysical assumptions in the new model. To make the simulations requires profiles of atmospheric variables and hydrometeor concentrations at each observation location. In this study, these are obtained from the level 2 retrieval products for each radar, which are based on the observations themselves (perhaps in some cases using additional information from colocated passive sensors). Hence the work is not an independent validation, but rather it tests, roughly, the equivalence between the new CRTM forward model and the forward models used in the retrievals made by the different data providers.
The work is well written and presented. The validation of the CRTM radar operator along with an exploration of its sensitivities to microphysical assumptions is significant and of interest to the community. The CRTM radar model, being supported, open source and publicly available, is likely to be used in many future studies as well as in weather models. However, the study is currently incomplete because it lacks any comparison between CRTM and the radar models being used by the level 2 data providers. This is the first place to look to explain the differences between the CRTM simulations and the observations. There are a number of other smaller issues to be addressed as well.
Main points
The methodology of this study is slightly circular, as explained above. Although this approach is justifiable for a model validation study, it needs to be more clearly flagged to readers, and its implications followed up in more detail. In this context, if the CRTM radar operator was identical to the forward model used in the level 2 retrievals, and if no information had been lost, then we would hope for an almost exact match between the CRTM simulations and observations. Already, the level of agreement between simulations and observations in terms of spatial structures is nearly exact, for example in Figure 1, which clearly shows that despite other passive data being used in the CloudSat retrievals, they must be very much dominated by the reflectivity. Despite the close spatial agreement between simulations and observations, there are systematic differences in the reflectivities which are of interest and are most likely explained by differing microphysical assumptions in the two forward models, or by different model methodology (such as treatments for single or multiple scattering, or cloud overlap and inhomogeneity within the beam). Another possible explanation is if some of the reflectivity-generating hydrometeor mass has been lost between the retrieval and the input to the simulation (for example, if the snow mass has been somehow discarded in the DPR comparison, which only uses cloud and rain hydrometeor profiles). I would hope that all results could be explored in more detail in this framework, following these suggestions:
1) At a high level (e.g. abstract, introduction, conclusions) the work should indicate more clearly that the source of the hydrometeor profiles is the observations themselves, and that the study can be more clearly seen as a consistency validation against the existing radar models employed in the level 2 retrievals.
2) The work needs to detail more exactly the radar forward modelling being used by the data providers, especially the basic assumptions such as single particle scattering model or habit and particle size distribution, and any more advanced methods used for multiple scattering, cloud overlap or inhomogeneity. Also a little more information needs to be provided on the CRTM configuration in these areas. I imagine it should be possible to provide a new table comparing exactly the configuration of CRTM to the configurations of the forward models used in the Cloudsat, EarthCARE and DPR retrievals.
3) The work needs to more clearly summarise the sources of hydrometeor mass being used in each comparison, since it differs. Again, this would be very helpful if presented in tabular form. It is also important to double check the retrieval methods and see if any hydrometeor mass has been missed. For the DPR example, where CRTM is only supplied with liquid water and rain profiles, if all the reflectivity above the melting layer has been attributed to supercooled droplets in the DPR retrieval, then even if in practice some of this reflectivity has been generated by frozen particles, it would not matter for the consistency validation, as long as the rain and cloud scattering models were identical. But if the DPR retrieval represents frozen particles separately, then mass may have gone missing in the comparison. Given the relatively good agreement shown in Figure 3, there is unlikely to be missing mass in the DPR comparison, but it would be important to be sure, and this would help reduce the number of knowledge gaps in the study.
4) Based on the comparisons listed at points 2 and 3 above, it should be possible to explain many of the systematic differences between simulations and observations. For example, the conclusion that the Thompson PSD worked better than the Abel PSD in simulating DPR is not useful on its own (this is not an independent validation), but it is useful when compared to the assumptions used by the DPR retrieval team.
5) Since there is a large body of existing work on radar simulators, much of which has fed into the forward models used for the Cloudsat, EarthCARE and DPR retrievals, it would be good to acknowledge this and cover it in the introduction too.
Minor points
1) Line 15 - radiative transfer models also critically include the surface, not just the atmosphere
2) Line 35 referencing the ZmVar model: there is a more recent publication that could be useful to reference here: “Direct 4D‐Var assimilation of space‐borne cloud radar reflectivity and lidar backscatter. Part I: Observation operator and implementation, by MD Fielding, M Janisková - Quarterly Journal of the Royal Meteorological Society, 2020”. Here, especially the two-column treatment of cloud overlap for the attenuation calculation is quite novel.
3) Line 72, the radar equation needs a little more explanation, since this refers more precisely to the equivalent radar reflectivity (or reflectivity factor) rather than the actual reflectivity. Here there should also be some support from prior textbooks or papers: for example Grant Petty’s book has a good introduction of these concepts.
4) Line 225: “Discrepancies between observed and simulated reflectivities can be attributed to a combination of inaccuracies in the input profiles, forward model errors, and observational biases”. Based on discussions above, I would not expect observational biases to have any relevance here.
5) Line 236, “CloudSat shows overall a better agreement. For both instruments, the simulations tend to overestimate reflectivity at lower values but underestimate it at higher reflectivity values.” This discussion needs to be more carefully framed because the CloudSat results are within tropical cyclones whereas the EarthCARE results are global. The difference in agreement could be purely due to the different datasets and have nothing to do with the retrievals or the sensors themselves.
6) Figure 2a is hard to reconcile with Fig 1, which shows simulations in convective cores reaching 30 dBZ, compared to observations rarely exceeding 20 dBZ. Figure 2a shows the opposite and the text concludes that the simulations are too low, not too high.
7) E.g. line 246 “included liquid and rain content”. Partly repeating earlier main points, it would have been much easier to follow the results if the basic details of the CRTM simulations had been presented and summarised, ideally in a table. This would include basic details like particle shape and PSD choices for each hydrometeor type in each simulation, along with the hydrometeor types being used in each case.
8) Figure 3 and others show a freezing level on the observations but not simulations, which makes it harder to compare the two. To make things more visually consistent and comparable, the line should ideally be on both panels.
9) Line 267 “we focus on non-attenuated reflectivity”. This is a bit confusing as, as far as I could see, the text has not yet clearly stated whether any of the results are based on attenuated or non-attenuated reflectivity. This needs a clearer signposting earlier in the text. I also don’t like the term “non-attenuated reflectivity” relating to observations, because what these actually are is “attenuation corrected reflectivity”.
10) Figure 4: the difference in the x-axis ranges makes comparison very hard. Please ensure consistency across all the panels, if possible.
11) Line 267 “do not account for frozen hydrometeors”. This is picking up on a point made earlier, that the more important question is whether the retrievals attempt to represent frozen hydrometeors separately, or whether any reflectivity above the freezing level is assumed to be explained by supercooled water droplets (assuming also that melting particles are not separately represented either).
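For reference, the distinction raised in point 3 is between the equivalent radar reflectivity factor, defined from the backscatter cross sections, and its Rayleigh-limit form. These are the standard textbook definitions (e.g. in Petty's book), not equations taken from the manuscript:

```latex
Z_e = \frac{\lambda^4}{\pi^5 \lvert K_w \rvert^2}
      \int_0^\infty \sigma_b(D)\, N(D)\, \mathrm{d}D
\quad \xrightarrow{\text{Rayleigh limit}} \quad
Z = \int_0^\infty D^6\, N(D)\, \mathrm{d}D ,
```

where \lambda is the radar wavelength, \lvert K_w \rvert^2 the dielectric factor of water, \sigma_b(D) the backscatter cross section, and N(D) the particle size distribution.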
Technical and grammar
1) The second part of the final sentence in the abstract seems to convey no concrete meaning - please rephrase for clarity or remove: “underscoring their potential to improve the assimilation of spaceborne radar data in NWP models”. Exactly what has the potential to improve assimilation is not clear.
2) Line 22 CRTM is a “pivotal collaborative model”. This doesn’t seem to convey much scientific meaning and should be removed or explained more clearly.
3) line 28: space radars “provide vertical profiles of cloud and precipitation structures” - this language is too loose. They provide reflectivity profiles giving information on vertical structures of cloud and precipitation.
4) line 50: “supports an operational environment” does not seem to convey a clear meaning and could be removed or rephrased.
5) equation 1: please use a more self-consistent mathematical notation both here and throughout the paper, and most critically of all, please explain what it is. Here the vector y is bold and the vector x is non-bold italic, for example. The use of bold capitals for both the observation operator H (presumably a nonlinear function) and the background error covariance B (a matrix) is potentially confusing. Typically bold capital H is used for the Jacobian matrix of the nonlinear operator H, which itself is usually written in non-bold italic capitalised or similar.
6) equation 4 and others: please explain or remove the comma notation (for multiplication?) which does not seem to be very standard.
7) Line 89 V(D) is not strictly the particle volume, in the case of non-spherical particles.
8) line 240: “as a result … DPR did not capture observations directly over the storm centre” is incorrect. The swath of DPR made it more likely to capture the storm centres (as opposed to nadir viewing sensors like CloudSat). In any case this is a separate message and requires a separate sentence for clarity.
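For point 5 above, a conventional notation for the 3D-Var cost function, standard in the data assimilation literature, would be:

```latex
J(\mathbf{x}) = \tfrac{1}{2} (\mathbf{x} - \mathbf{x}_b)^{\mathrm{T}}
                \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_b)
              + \tfrac{1}{2} \bigl( \mathbf{y} - H(\mathbf{x}) \bigr)^{\mathrm{T}}
                \mathbf{R}^{-1} \bigl( \mathbf{y} - H(\mathbf{x}) \bigr),
```

with bold lowercase for vectors (\mathbf{x}, \mathbf{y}), bold uppercase for matrices (\mathbf{B}, \mathbf{R}), italic capital H for the nonlinear observation operator, and bold \mathbf{H} = \partial H / \partial \mathbf{x} reserved for its Jacobian.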