This work is distributed under the Creative Commons Attribution 4.0 License.
Climate response to off-equatorial stratospheric sulfur injections in three Earth System Models – Part 1: experimental protocols and surface changes
Abstract. There is now a substantial literature of climate model studies of equatorial or tropical stratospheric SO2 injections that aim to counteract the surface warming produced by rising concentrations of greenhouse gases. Here we present the results from the first systematic intercomparison of climate responses in three Earth System Models in which SO2 is injected at different latitudes in the lower stratosphere. Our aim is to determine commonalities and differences between the climate model responses in terms of the distribution of the optically reflective sulfate aerosols produced from the oxidation of SO2, and in terms of the surface response to the resulting reduction in solar radiation. The companion paper (Bednarz et al., 2022) focuses on understanding the contribution of the models' transport characteristics alongside their microphysical and chemical schemes, and on evaluating the resulting stratospheric responses in the different models. The goal of this exercise is not to evaluate these single-point injection simulations as stand-alone proposed strategies to counteract global warming; instead, we determine sources and areas of agreement and uncertainty in the simulated responses and, ultimately, the possibility of designing a comprehensive intervention strategy capable of managing multiple simultaneous climate goals through the combination of different injection locations. We find large disagreements between GISS-E2.1-G and the CESM2-WACCM6 and UKESM1.0 models regarding the magnitude of cooling per unit of aerosol optical depth (AOD) produced, from 4.7 K per unit of AOD in CESM2-WACCM6 to 16.7 K in the GISS-E2.1-G version with modal aerosol microphysics.
By normalizing the results with the global mean response in each of the models, and thus assuming that the amount of SO2 injected is a free parameter that can be managed independently, we highlight some commonalities in the overall distributions of the aerosols, in the inter-hemispheric surface temperature response and in shifts to the Inter-Tropical Convergence Zone, and also some areas of disagreement, such as the aerosol confinement in the equatorial region and the transport to polar latitudes.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2022-401', Anonymous Referee #1, 18 Jul 2022
Review of Visioni et al. Climate response to off-equatorial stratospheric sulfur injections in three Earth System Models – Part 1: experimental protocols and surface changes
General comments
In this study, the authors investigate how stratospheric aerosol intervention (SAI) using SO2 injections at different latitudes affects the aerosol distribution, aerosol optical depth, and surface climate (temperature and precipitation) in three different Earth System Models. The authors find differences between the models and also between different aerosol setups in the same model. The authors then describe the development of feedback algorithms to be used in future simulations to manage injections of SO2 to meet temperature targets as the runs proceed. In general, the paper is clear and easy to read, and the analysis is logical. The results will be of interest to the geoengineering community and the paper is well-suited to ACP. I have two main comments that I suggest the authors consider before publication:
Given the focus on aerosol microphysics driving differences between these results, can the authors highlight how the aerosol schemes differ between the models, not just in terms of the modal properties (Table 1), but how the aerosol processes are treated? It’s mentioned on L145 that condensation is treated differently in GISS, but how? Given the differences found for effective radius, it would be useful to show (perhaps in the SI) some of the other aerosol metrics such as SO2 conversion (highlighted on L419 as an important discrepancy), the nucleation, condensation and coagulation rates, and fluxes between the modes, and explain how these parameterizations differ between the models. Can we learn more here about these uncertainties compared to multi-model volcanic eruption studies that have already shown that differences in AOD are due to different microphysical schemes? What specific areas of improvement have been found in this work as stated in the conclusions at L395?
Section 5 - without going back to previous references (such as Kravitz et al., 2017), this section was hard to follow, especially for someone not familiar with such feedback algorithms. It would be useful if the mathematical relationships were described further in the text and that all letters and symbols were defined and listed immediately after the equations – for example, q and equations for T0 – T2. It would also be helpful if the section was more explicit with signposting to the relevant subplot or line on Figure 10 – e.g., expanding L374 to ‘pattern ‘of AOD’ similar to the target (black dashed lines)’, or similar. It was also not clear to me how the feedback algorithms are different/similar to previous work and what the implications are.
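For readers who, like the referee, find the T0–T2 notation hard to follow: in the Kravitz et al. (2017) framework these are projections of zonal-mean surface temperature onto the first three Legendre polynomials in sin(latitude), area-weighted by cos(latitude), giving the global mean, the interhemispheric gradient, and the equator-to-pole gradient. A minimal Python sketch of that kind of projection (the normalization convention here is our assumption, not taken from the paper):

```python
import numpy as np

def temperature_moments(lat_deg, t_zonal):
    """Project zonal-mean temperature onto the first three Legendre
    polynomials in sin(latitude), area-weighted by cos(latitude).

    Returns (T0, T1, T2): global mean, interhemispheric gradient,
    and equator-to-pole gradient. Sketch only: the exact
    normalization in Kravitz et al. (2017) may differ.
    """
    psi = np.deg2rad(np.asarray(lat_deg, dtype=float))
    x = np.sin(psi)                      # Legendre argument
    w = np.cos(psi)                      # area weight
    basis = [np.ones_like(x), x, 0.5 * (3.0 * x**2 - 1.0)]
    t = np.asarray(t_zonal, dtype=float)
    # weighted least-squares projection onto each polynomial
    return tuple(np.sum(t * p * w) / np.sum(p * p * w) for p in basis)
```

A uniform temperature field then yields T0 equal to that temperature with T1 near zero, while a field proportional to sin(latitude) projects entirely onto T1.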
Specific comments
Abstract: an extra sentence at the end summarizing the overall implications of this work would be useful. It was also not clear whether all models included modal aerosol microphysics schemes. I would suggest introducing the four model setups at the start.
L30-L35: A few more relevant references could be added here e.g., Zanchettin et al. (2016; 2022).
L82: It was unclear at this stage what these targets are
L121: What are the differences in the aerosol scheme?
L149: Please clarify what you mean here
L181: Why AOD and not SAOD? Please also describe the overall evolution of this figure – i.e., the ~2 years of adjustment and therefore why the last 7 years are used in subsequent averages
L235: How is the lifetime defined? Please remove ‘obviously’
L253: Is this wet or dry radius? Fig. SX --> Fig. S1.
L263: What are the dynamical differences? Do the authors have an explanation for the stronger poleward transport in CESM given also the differences in particle size? Why is the transport in the 15S case more similar to GISS modal?
L279: The initial results shown between the global mean AOD and global mean temperature in Figure 1 could also be discussed here.
L289 – L305: I found this hard to follow. Has the sensitivity to aerosols in GISS been increased or not?
L310-311: Unclear exactly what you mean here
Figure 7: What’s causing the different response for CESM2 and UKESM for 30N compared to the other injection locations?
L314: I would suggest moving the overall description of the precipitation changes from the second paragraph to here as it is a long time before the results are described. What about the global percentage changes shown in Figure 8?
L336: shown on left hand side of Figure 8?
L338: There are several newer studies on the impact of eruptions on the ITCZ that could be cited here. Please see Marshall et al. (2022) for some examples.
Figure 10 caption: Please explain what L0, L1 and L2 are and label the black dashed lines.
L346: I think it would be helpful to state what these are here, as is done in the conclusions
L399: This paragraph focuses on the methods, but what are the actual results? How do the results differ depending on the injection location?
L447: This last sentence is difficult to follow
Technical corrections
L3: occurs
L23: please add numbers for the three items in this list
L69: only --> one
L71: a --> the
L84: impacts
L86: in --> with?
L120: eruptions
L208: shows
L209: standard deviations
L211: check commas
Figures: please check all x and y labels are present (missing from 2, 6 and 8) and remove red/green line combinations
L312: 2020 --> 2021
L318: clouds --> cloud
L320: is --> are
Figure 8 caption: five --> seven
L441: insert ‘than’
L444: seems
References
Marshall, L.R., Maters, E.C., Schmidt, A. et al. Volcanic effects on climate: recent advances and future avenues. Bull Volcanol 84, 54 (2022). https://doi.org/10.1007/s00445-022-01559-3
Zanchettin, D., Khodri, M., Timmreck, C., Toohey, M., et al. The Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP): experimental design and forcing input data for CMIP6, Geosci. Model Dev., 9, 2701–2719, https://doi.org/10.5194/gmd-9-2701-2016, 2016.
Zanchettin, D., Timmreck, C., Khodri, M., Schmidt, A., et al. Effects of forcing differences and initial conditions on inter-model agreement in the VolMIP volc-pinatubo-full experiment, Geosci. Model Dev., 15, 2265–2292, https://doi.org/10.5194/gmd-15-2265-2022, 2022.
Citation: https://doi.org/10.5194/egusphere-2022-401-RC1
AC1: 'Reply on RC1', Daniele Visioni, 14 Sep 2022
-
RC2: 'Comment on egusphere-2022-401', Anonymous Referee #2, 02 Aug 2022
The authors compare the output of geoengineering simulations performed with three Earth system models (one of which was run with two different aerosol schemes) to determine the differences in the AOD, temperature, and precipitation responses produced by the same injection of SO2. The authors provide an exhaustive comparison of these quantities (especially AOD and temperature, less so precipitation) and attempt to provide hypotheses about the reasons for the discrepancies.
Generally, I have found this article clear, with a good choice of figures, but the complete lack of observations limits its impact. Of course, I am aware that there are no observations of geoengineering, but relevant variables, for instance the isolation of the tropical stratosphere or the background (non-SAI) AOD, can be evaluated against observations. Introducing observations would allow us to understand which model has a more reliable representation of transport and dynamics, as well as of background aerosol and sensitivity to changes. I understand that this evaluation against observations is not the focus of this paper, but it has probably (hopefully?) been done in other articles, and the main findings could be reported here. Otherwise, the main message of this paper is “the models differ”, which is certainly correct but not particularly telling unless we can understand whether all of these models produce equally possible outcomes or if one is less reliable than the others.
Secondarily, I am not sure if the OMA experiment has been set up correctly. I couldn’t find anywhere how the aerosol radius was chosen. Is it the usual radius used for tropospheric aerosol? It seems like most of the differences between OMA and MATRIX result from a much smaller aerosol radius than the other models. The authors should have first run an experiment with MATRIX, calculated the resulting effective radius, and set up OMA to have that effective radius. As it is I am not sure about the significance of the OMA experiment.
Specific comments
Sections 2.1 to 2.2: I suggest harmonizing the three model descriptions. CESM2 has comprehensive stratospheric chemistry and simplified tropospheric chemistry; what about GISS and UKESM? The GISS description mentions only heterogeneous chemistry, and the UKESM description doesn’t mention chemistry at all. I would at least mention whether UKCA is bulk, modal, or sectional and whether it’s coupled to the chemistry. I know they are described better below, but all three descriptions should have the same format.
Line 145: “. Condensational growth leading to a transfer between Aitken and Accumulation modes is also treated differently than in the other two models” differently how?
Table 1: I imagine that the GISS bulk model also assumes a size distribution, for instance to calculate the optical properties, and that the 0.3 um is the modal radius of the fixed size distribution. Is that the case for OMA (if so, a standard deviation must be specified for the prescribed mode), or does OMA really prescribe that all particles are 0.3 um? Also, I would add the aerosol effective radius simulated by the three models with microphysics. Lastly, how many ensemble members have been performed? I don’t think I have found it anywhere.
Line 149: I’d specify the diameter here rather than the radius, to avoid confusion with the table where the diameter is specified.
L165: I am not sure I understand why choosing 22km over 25 km would make it easier to inject in one grid box. Also, it is not clear what “same grid box” refers to. Same across models (I suspect it’s not because they have different layers)? Same in time? I am confused by this paragraph.
L188: I do not understand the goal of the second half of this paragraph, starting from Line 186. Is the point to say that the authors don’t care about the fact that the same injection leads to very different AOD? I don’t agree with including a sentence like this, since this is a pretty fundamental conversion that models should agree on. Rather than this, an attempt should be made to explain why there is a difference. Is it because the SO4 removal is less efficient (maybe the particles are smaller in GISS bulk than with explicit microphysics), or because of the different aerosol optical properties due to the different sizes? Is it possible to include the effective radii calculated in all models, to see how they compare with each other and with GISS bulk, as well as the SO4 burden? This is partly answered in Fig. 4, and it would be good to mention it here.
L210: I imagine the models must have been compared to observations at some point. It would be helpful here to give a description of how each model compares to observations with respect to the basic stratospheric circulation: for instance, is UKESM known to have a too-isolated tropical pipe or too strong vertical transport in the tropical stratosphere? What about interannual variability: are the simulated variabilities similar to the observed ones (I mean in control simulations that must have been performed in the past)?
Fig. 3 needs improvement. The labels and ticks of the color scale are illegible. Since the same color scale is applied to all panels, I suggest using one larger color bar at the bottom of the figure, and also enlarging the fonts on the axis.
Line 237: I think it’s panel 4h, not 4g.
L240: as I mention above, I suspect OMA assumes a lognormal distribution with a modal radius of 0.15 micron. If that’s the case, the effective radius can be calculated for OMA using relationships between modal and effective radius in lognormal distributions (I think it’s in Seinfeld and Pandis, but in any case is also included in Aquila et al. 2012). If that’s the case, I suggest adding the effective radius for OMA for comparison.
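The modal-to-effective-radius conversion the referee refers to is a standard lognormal moment identity: for a number size distribution with median radius r_g and geometric standard deviation σ_g, r_eff = r_g · exp(2.5 ln²σ_g). A short sketch (the 0.15 µm value is the referee's guess for OMA, and the σ_g used is a placeholder, not a value from the paper):

```python
import math

def lognormal_effective_radius(r_median, sigma_g):
    """Effective (area-weighted mean) radius of a lognormal number
    size distribution: r_eff = r_g * exp(2.5 * ln(sigma_g)**2)."""
    return r_median * math.exp(2.5 * math.log(sigma_g) ** 2)

# Illustrative only: 0.15 um is the referee's guessed OMA modal
# radius; sigma_g = 2.0 is a placeholder geometric std. deviation.
r_eff = lognormal_effective_radius(0.15, 2.0)
```

If “modal radius” denotes the mode of the number distribution rather than its median, convert first via r_g = r_mode · exp(ln²σ_g) before applying the identity above.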
L253: number of the supplementary figure is missing
L257 radius _IN_ GISS model
L272: there are three “for instance” in three lines.
L286: how many models were included in the multi-model average of GeoMIP G6?
L297 and following: the discussion about tuning is quite vague and could be made more precise by looking into the model setup and seeing which tuning parameters have been changed to remedy the low background AOD. Also, I am not sure I understand the reasoning; the background (non-SAI) AOD can be verified against observations, and comparing against observations could tell us whether 0.03 or 0.11 is more reasonable. If 0.03 is too low (compared to observations), the most obvious “fix” to me seems like increasing emissions, or decreasing the radius, rather than changing the temperature sensitivity to aerosols. Also, which tuning parameter would affect the temperature sensitivity to aerosols specifically?
Fig. 6: the letters identifying the panels are missing
L343: one “at” too many
L353: what is the difference between (l0, l1, l2) and (L0, L1, L2)? Generally, I find this explanation a bit confusing. It’s pretty clear in Kravitz et al. (2016). I would either make it longer and more explicit, or shorter and more qualitative with an explicit reference to go look in Kravitz et al. (2016). It is a bit difficult to keep in mind the physical meaning of what the text explains. I have also found this section quite disconnected from the previous ones in terms of style and clarity, at the point that it could be moved to a different paper where it would be easier to expand on the meaning of the results.
Citation: https://doi.org/10.5194/egusphere-2022-401-RC2
AC2: 'Reply on RC2', Daniele Visioni, 14 Sep 2022
Peer review completion
Journal article(s) based on this preprint
Daniele Visioni
Ewa M. Bednarz
Walker R. Lee
Ben Kravitz
Andy Jones
Jim M. Haywood
Douglas G. MacMartin