This work is distributed under the Creative Commons Attribution 4.0 License.
Synergistic approach of hydrometeor retrievals: considerations on radiative transfer and model uncertainties in a simulated framework
Abstract. In cloudy situations, infrared and microwave observations are complementary: infrared observations are sensitive to small cloud droplets and ice particles, while microwave observations are sensitive to precipitation. This complementarity can lead to fruitful synergies in precipitation science (e.g. Kidd and Levizzani, 2022). However, several sources of error exist in the treatment of infrared and microwave data that could prevent such synergy. This paper studies several of these sources to estimate their impact on retrievals. To do so, simulated observations are built from the radiative transfer model RTTOV v13; this fully simulated framework makes it possible to isolate and explain the impacts of the identified errors. A combination of infrared and microwave frequencies is built within a Bayesian inversion framework. Synergy is studied using three sets of experiments: (i) with several sources of error eliminated; (ii) with only one source of error considered at a time; (iii) with all sources of error together. The derived retrievals of frozen hydrometeors for each experiment are examined in a statistical study of fifteen days in summer and fifteen days in winter over the Atlantic Ocean. One of the main outcomes of the study is that the combination of infrared and microwave frequencies takes advantage of the strengths of both spectral ranges, leading to accurate retrievals. The impact of each source of error depends on the type of hydrometeor. Another outcome is that even though the errors may reduce the magnitude of the benefits brought by the combination of infrared and microwave frequencies, in all cases explored their combination remains beneficial.
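As an illustration of the retrieval framework mentioned in the abstract, the sketch below shows a generic Bayesian Monte Carlo (database) inversion of the kind commonly used for hydrometeor retrievals. It is illustrative only and does not reproduce the paper's exact scheme: the variable names, the Gaussian observation-error model, and the posterior-mean estimate are all assumptions.

```python
import numpy as np

def bayesian_retrieval(y_obs, tb_db, x_db, r_cov):
    """Generic Bayesian Monte Carlo inversion (illustrative sketch):
    weight each database profile by the Gaussian likelihood of its
    simulated brightness temperatures given the observation y_obs.

    y_obs : (n_chan,)        observed IR+MW brightness temperatures
    tb_db : (n_db, n_chan)   simulated Tbs for the database profiles
    x_db  : (n_db, n_lev)    hydrometeor profiles of the database
    r_cov : (n_chan, n_chan) observation + simulation error covariance
    """
    d = tb_db - y_obs                          # departures, (n_db, n_chan)
    r_inv = np.linalg.inv(r_cov)
    chi2 = np.einsum("ij,jk,ik->i", d, r_inv, d)
    w = np.exp(-0.5 * (chi2 - chi2.min()))     # shift avoids underflow
    w /= w.sum()
    return w @ x_db                            # posterior-mean profile

# Toy usage with random numbers standing in for RTTOV simulations:
rng = np.random.default_rng(1)
tb_db = rng.normal(250.0, 10.0, size=(500, 8))    # 500 profiles, 8 channels
x_db = rng.gamma(2.0, 0.05, size=(500, 30))       # IWC-like profiles
y_obs = tb_db[42] + rng.normal(0.0, 1.0, size=8)  # "observation"
profile = bayesian_retrieval(y_obs, tb_db, x_db, np.eye(8))
```

In such a framework, combining instruments amounts to concatenating the infrared and microwave channels in `y_obs` and `r_cov`, which is where the synergy between the two spectral ranges enters.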
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-446', Anonymous Referee #1, 13 Apr 2023
Review of Villeneuve et al. – Synergistic approach of hydrometeor retrievals: considerations on radiative transfer and model uncertainties in a simulated framework
The authors present a synthetic study of retrieval synergy between microwave, infrared, and sub-mm observations for constraining ice hydrometeors. This paper is well-written and relevant for publication in this journal. The study is well-constructed, the methodology is explained well, and reasonable conclusions are drawn about the synergistic value of these observations for frozen hydrometeors, while being clear about the caveats and shortcomings of this synthetic approach. My recommendation is for publication after some minor corrections.
One area where the manuscript is a little lacking is the context of other synergistic studies in the literature. For instance, there are recent studies examining the combined use of MWI and ICI (https://amt.copernicus.org/articles/13/4219/2020/) and also active sensors, including using real observations (https://amt.copernicus.org/articles/15/677/2022/). As their conclusions regarding the importance of particle shape and RT errors in general are similar to those of this study, it is perhaps worth mentioning them in the discussion section or the introduction to provide some context for readers. Other studies have also probed the importance of microphysical assumptions for different wavelengths (https://amt.copernicus.org/articles/14/5369/2021/, https://amt.copernicus.org/articles/13/501/2020/), again with conclusions that seem compatible with this study. There are also Bayesian-based studies on ice hydrometeor retrieval synergy (https://amt.copernicus.org/articles/15/927/2022/), including several others that examine radar/radiometer synergy from CloudSat and GPM. The authors’ paper is more focused on eventual assimilation, but some context on retrieval-focused studies could still be helpful. The papers listed above are just from AMT, and surely there are others elsewhere.
Specific comments:
L11 – “takes advantage of both spectral range strengths” could perhaps be rewritten as “takes advantage of the strengths of both spectral ranges”
L13 – This last sentence is not specific enough and may need to be reworded. For instance, does “the errors” mean “the radiative transfer and numerical modelling errors”? Does “their combination” mean the combination of different sensors? It’s also not clear that this is true in “all cases explored” because the graupel combined retrieval performed worse than MWI when it came to graupel.
L21 – “a significant information content” is quite vague, suggest rewording
L26 – Worth spelling out what all-sky means for readers
L33 – Suggest changing “at discussing” to “to explore”
L50 – This is nitpicking, but sub-mm is greater than 300 GHz, so ICI will technically measure MW and sub-mm wavelengths
L51 – Again a technicality, but there have been short-lived sensors measuring at sub-mm, and AWS might be launched before ICI, so this statement could be toned down.
Table 2 – It’s a bit confusing how “OBS FG” is shown. Does this mean OBS in rows and FG in columns? Could make this clearer.
L113 – See tables 4 & 5, presumably?
Tables 3, 4, 5 – Would it be possible to combine into one table? This would make it easier to compare values across sensors.
L145 – Tables 6, 7, & 8, presumably?
Tables 6, 7, 8 – Same as above, could these be combined?
L211 – It would be helpful to provide more detail here to explain exactly why this validation comparison is done. Right now it feels quite implicit and the reasoning is split up (L234, L355), but it would be helpful to spell out exactly why the validation was performed at the beginning of this section.
Figures 2 & 3 – y-axis should be STD and mean?
Figures 4, 6, 8 – Here the y-axis could be reasonably cut off at 100 hPa, as presumably the significant differences are spurious noise above this level.
L271 – Reword “instrumental synergy” to something like “synergy of the instruments”
Figures 5, 7, 9 – Here the x-axis label of “abs error” was quite confusing for me. Isn’t this the difference definition given in Section 2.5? Also a typo in first panel of ‘mTR’ rather than ‘mRT’
L306 – Why is this given halfway through the results section? It would make much more sense at the beginning of Section 4.
L318 – Worth mentioning that the FCI curves are again absent in Fig. 9, as stated for Fig. 7
Citation: https://doi.org/10.5194/egusphere-2023-446-RC1
- AC1: 'Reply on RC1', Ethel Villeneuve, 14 Nov 2023
- RC2: 'Comment on egusphere-2023-446', Anonymous Referee #2, 17 Oct 2023
Review comments for “Synergistic approach of hydrometeor retrievals: considerations on radiative transfer and model uncertainties in a simulated framework” by Villeneuve et al.
This work thoroughly assessed the benefits of combining passive thermal infrared (TIR), sub-millimeter (sub-mm) and microwave (MW) instruments in retrieving frozen hydrometeors (cloud ice, snow, and graupel), using a Bayesian retrieval framework and regional model outputs as the input “truth” for retrieval and validation. What’s more, this work also evaluated two retrieval error sources, from the radiative transfer model’s microphysics assumptions (mRT experiments) and from the model microphysics parameterization schemes (mMOD experiments), that might lead to degradation of the synergy. Three upcoming spaceborne instruments (FCI, ICI and MWI) are brought in for specific channel frequency settings, but their mismatched footprints, viewing angles, etc., were not yet considered in this paper. The major conclusion of this work is that the synergy is overall better than using any of the three instruments individually for retrieving cloud ice and snow profiles, but not necessarily for graupel. Retrieval performance (i.e., error) tends to be more sensitive to the model microphysics scheme parameters than to which ice particle shape is chosen (again, the opposite for graupel). The discussion speculates on possible explanations for the behavior of the error metrics (mainly the standard deviation of the inversion error compared with that of a single instrument). Some other error sources that are important but not considered in this work are laid out at the end.
Overall I think this is a solid paper that involves extensive effort and a sound framework for testing and validation. Therefore, I strongly support the eventual publication of this work. However, I do think there are some important issues that need to be clarified, and some further potential experiments that need to be considered. I wouldn’t give “major revision” as my recommendation, as the idea, methodology, execution and presentation of results do not have serious issues and deserve great appreciation.
The overarching goals are a bit ambitious in design: trying to assess two important problems (i.e., synergy, and synergy sensitivity) in one paper. I would rather consider separating it into two companion papers, so that each one can focus on one problem and elaborate on it more fully. Right now the first 14 pages are spent describing the experiment settings, while pages 15–20 present the results, followed by a very brief one-page discussion. I feel this is not an ideal paper structure. Below are my major concerns:
(1) For assessing the synergy benefit, CA and SCP are introduced in Equations (5) and (8) for IR and MW, respectively. Maybe these are standard parameters that data assimilation people already use a lot, but as a retrieval person I have no clue why these two metrics are used, what their physical meanings are, and why they are inconsistent between TIR and MW. Substantial explanations and discussion are needed here. For example, in Fig. 2, why are the average errors slightly positive for noERR and mRT, yet larger and negative for mMOD, while the standard deviations are comparable in size? Shouldn’t the CA error increase for thicker clouds (I guessed this from the grey bars, which I assume correspond to the number of cases)? Would you be concerned about using the STD difference to assess your synergy performance when the biases are not even of the same sign?
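To make the last question concrete, here is a minimal synthetic illustration (the numbers are invented, not taken from the paper): two experiments can have comparable error STDs while their biases differ in sign and magnitude, and an STD-difference metric alone cannot distinguish them.

```python
import numpy as np

# Synthetic inversion errors mimicking the reading of Fig. 2 above:
# noERR with a small positive bias, mMOD with a larger negative bias,
# both with comparable spread. All values are made up for illustration.
rng = np.random.default_rng(0)
err_noerr = rng.normal(loc=+0.05, scale=1.0, size=10_000)
err_mmod = rng.normal(loc=-0.40, scale=1.0, size=10_000)

for name, err in [("noERR", err_noerr), ("mMOD", err_mmod)]:
    print(f"{name:5s}  bias = {err.mean():+.2f}   STD = {err.std():.2f}")
# Both STDs are ~1.0, so an STD-based synergy metric alone hides the
# sign flip and growth of the systematic error.
```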
(2) For the mMOD experiments, I do not understand how tuning so many parameters can give one final assessment value at the end. Did you carry out two sets of experiments, one using the lower-bound values in your Table 11 and one using the upper-bound values, and then compute their difference against your noERR results? As acknowledged in this paper, significant compensations among the different parameters are to be expected. It is worth at least one paragraph here, or better an appendix section, describing the details of the mMOD experiment settings.
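If the design is indeed the two-bounds comparison hypothesized above, it could be summarized along the following lines. This is only a sketch of that assumed design: `run_chain`, the parameter names, and the bounds are placeholders, not the paper's actual Table 11 values.

```python
import numpy as np

def run_chain(params, seed):
    """Placeholder for the forecast + retrieval chain; returns synthetic
    inversion errors for one microphysics configuration."""
    return np.random.default_rng(seed).normal(size=5_000)

# Hypothetical Table 11-style bounds (multiplicative factors, assumed):
bounds = {"ice_fall_speed": (0.8, 1.2), "snow_intercept": (0.5, 2.0)}

err_ref = run_chain({k: 1.0 for k in bounds}, seed=0)     # noERR baseline
for i, pick in enumerate(("lower", "upper"), start=1):
    perturbed = {k: v[i - 1] for k, v in bounds.items()}  # all parameters
    err = run_chain(perturbed, seed=i)                    # moved together
    print(f"{pick}: dSTD vs noERR = {err.std() - err_ref.std():+.3f}")
# Because all parameters move together, compensations between them are
# folded into a single dSTD value, which is exactly the ambiguity raised.
```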
(3) I have some issues with the settings of the mRT experiments, in particular the selection of the snow particle shape for noERR and mRT. The 183 GHz and sub-mm channels are particularly sensitive to the snow particle shape, and many previous observations from limited field campaigns or satellites have demonstrated that it is more appropriate to use “Evans snow” or “Liu’s sector snow” for snow, and that the largest discrepancies come from the “soft spheroid” (e.g., Ekelund et al., 2020; Gong et al., 2021). The two snow shapes usually produce quite similar results for the sub-mm and MW channels. This is the part of the settings that I feel uncomfortable with and suggest changing. For “graupel”, we usually use “8-column aggregates” in ARTS, but I guess that is not a serious issue, as I expect the sub-mm and high-frequency MW bands to saturate for graupel quickly anyway.
I am not surprised to see that the graupel retrieval error is not sensitive to mMOD, and I agree with the authors that the cumulus parameterization scheme is probably what matters for graupel, rather than the microphysics details of cloud ice and snow (it still surprises me that tuning the entrainment rate seems not to work). However, it is also worth noting in the paper that none of these channels is really sensitive to graupel, so it is expected that the synergy of the three would make things worse. I might have overlooked this point in the manuscript, or I feel it is not emphasized enough in the discussion related to Figs. 8 and 9.
(4) Another relatively important issue that is overlooked in the mRT experiments is the assumption on the particle size distribution (PSD), which matters a lot for the sub-mm and MW channels. There is a variety of PSD choices in ARTS, and I think it is worth considering in the mRT experiments.
Minor issues:
Fig. 5, 7 & 9: I’d strongly suggest including a mean IWC profile with standard deviation as a reference panel in addition to the current ones. This would help readers to at least visually assess the percentage error of the improvements. For example, I found it very interesting to see that mRT improves the cloud ice retrieval when you focus on snow (Fig. 7a, cyan vs. green). By the way, mRT seems to be misspelled as “mTR” in all three figure sub-titles.
Section 5.3: I can’t agree with you more on your points #3 and #4. For #3, please consider citing Barlakas and Eriksson (2020); it is a nice paper focusing on sub-grid variability for sub-mm radiometer retrievals. For models with 5 km resolution, the grid spacing is comparable to the footprint size of these sensors but faces a similar order of sub-grid variability. For #4, it becomes real when it comes to the actual design of combined algorithms, which is worth another paper to discuss, and #3 and #4 are tightly tied. (I guess this is just a comment.)
References:
Barlakas and Eriksson (2020): https://doi.org/10.3390/rs12030531
Ekelund et al. (2020): https://doi.org/10.5194/amt-13-501-2020
Gong et al. (2021): https://doi.org/10.5194/essd-13-5369-2021
Citation: https://doi.org/10.5194/egusphere-2023-446-RC2
- AC2: 'Reply on RC2', Ethel Villeneuve, 14 Nov 2023
Authors
- Ethel Villeneuve
- Philippe Chambon
- Nadia Fourrié