Scenario and Model Dependence of Strategic Solar Climate Intervention in CESM
Abstract. Model dependence in simulated responses to stratospheric aerosol injection (SAI) is a major uncertainty surrounding the potential implementation of this solar climate intervention strategy. We identify large differences in the latitudinal distribution of injected aerosol mass between two recently produced climate model SAI large ensembles, despite their use of similar climate targets and controller algorithms, and seek to understand the drivers of these differences. Using a hierarchy of recently produced simulations, we identify three main contributors: (1) the rapid adjustment of clouds and rainfall to elevated levels of carbon dioxide, (2) the associated low-frequency dynamical responses in the Atlantic Meridional Overturning Circulation, and (3) contrasts in the future climate forcing scenarios. Each of these uncertainties is unlikely to be significantly narrowed over the likely timeframe of a potential SAI deployment if a 1.5 °C target is to be met. The results thus suggest the need for significant flexibility in climate intervention deployment to account for these large uncertainties in the climate system response.
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2022-779', Alan Robock, 07 Sep 2022
This is an excellent paper, and I recommend it be accepted after addressing all the 86 comments in the annotated manuscript, most of which have to do with how the information is communicated. The main point, that rapid adjustments in clouds and precipitation can produce different spatial patterns of response, and that these are connected to larger scale responses, such as in AMOC, is important, and this happens in two versions of the same model. So I recommend that this general result be emphasized and “CESM” be removed from the title.
Also, I think the aim of reducing risks so that SAI can be implemented in the last sentence of the paper should be deleted. This is not a result of the work here, and not supported by any analysis. Should you rather say that we should work to emphasize the risks so that SAI is never implemented? I think it would be better to just say that we want to characterize the risks and benefits so that any future decisions to implement SAI will be informed decisions.
The formatting of degree symbols and subscripts, such as for CO2, is not working, although superscripts for W m−2 are working.
Also, it seems that undefined strange codes (e.g., FSNT) are used for variables, but never explained. It seems like NCAR inside baseball, where people who use CESM all the time memorize these codes, but they are hard to understand for others. And the direction of the fluxes is not specified. Is downward TOA radiation positive or negative? Is this true for shortwave and longwave? What about net?
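For readers outside the CESM community, a minimal sketch of the convention in question, assuming the standard CAM history variable definitions (the file name is a placeholder, not from the paper):

```python
import xarray as xr

# Assumed CESM/CAM top-of-model convention (to be confirmed by the authors):
#   FSNT = net shortwave flux at top of model, positive downward
#   FLNT = net longwave flux at top of model, positive upward
ds = xr.open_dataset("cesm_history_file.nc")   # placeholder file name
net_toa = ds["FSNT"] - ds["FLNT"]              # W m-2, positive = net energy gain by the system
```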
I recommend that the insignificant regions on the maps in the figures be stippled, not the significant ones. As is, all the important information is covered by stipples, but the part we should not focus on is not covered.
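For illustration, a minimal matplotlib sketch (synthetic data, not the paper's plotting code) of the suggested convention, hatching only the insignificant regions so the significant signal remains visible:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic anomaly field and p-values on a regular lon/lat grid.
rng = np.random.default_rng(1)
lon = np.linspace(0.0, 360.0, 144)
lat = np.linspace(-90.0, 90.0, 96)
field = rng.standard_normal((lat.size, lon.size))
pvals = rng.random((lat.size, lon.size))

fig, ax = plt.subplots()
cf = ax.contourf(lon, lat, field, cmap="RdBu_r")    # show the full field
ax.contourf(lon, lat, pvals, levels=[0.05, 1.0],    # hatch only p >= 0.05,
            colors="none", hatches=["..."])         # i.e., the insignificant regions
fig.colorbar(cf, ax=ax, label="anomaly")
plt.show()
```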
In several places, the authors use “it is notable,” but everything in the paper should be notable or it should not be there. These should be deleted to save space.
Review by Alan Robock
-
AC1: 'Reply on RC1', John Fasullo, 09 Nov 2022
We would like to thank Alan for the time he spent on the manuscript and his insightful and quite helpful comments. A detailed list of changes to address his feedback is included below. Other minor corrections have also been made, such as in the references list.
To address his comments:
We have reformatted all subscripts and superscript degree symbols.
We’ve removed CESM from the title as suggested.
We’ve changed the FLNT and FSNT variable acronyms to be more intuitive.
We’ve clarified instances of ambiguous flux directionality.
We’ve removed the language regarding implementation of SAI as suggested.
We’ve revised stippling in all figures as suggested.
Filler language has been deleted as suggested.
While we acknowledge the point made about SAI being a bit of a misnomer, we’ve decided to retain SAI as defined, as this is the conventional usage, such as in the 2021 National Academies’ report.
Line 14: The 1.5C target is described further.
Line 19: Changed SCI to intervention
Line 64: Spelling of sulphate is changed.
Line 73: NH is now capitalized.
Line 75: Text inserted stating same WACCM resolutions
Line 79: Punctuation fixed.
Line 123: “Doesn’t” is changed to “does not”
Line 128: Removed “moderate” scenario…
Line 140: Extra text removed
Line 177: NH acronym is now used as previously defined
Line 280: “Will be” replaced by “are”
Line 329: Text changed to “two versions of the same climate model”
Table 1: edited per suggestions
Note that for the AMOC analysis the units are retained in the EOF (See new figure in the Appendix). We’ve expanded the associated discussion.
Citation: https://doi.org/10.5194/egusphere-2022-779-AC1
-
RC2: 'Comment on egusphere-2022-779', Douglas MacMartin, 14 Sep 2022
Overall this is an important paper that makes a valuable contribution and should ultimately be published, though there are some things in the presentation that could be improved.
A particular example (and this is redundant with my comments below) is the omission of much on the targets used in the simulation design for both SAI datasets considered, the fact that those targets are simply choices, and being clear that the reason that the uncertainties discussed herein manifest themselves as a big difference in the distribution across latitudes of injection rates (and corresponding aerosol distribution) rather than as a difference in the eventual interhemispheric temperature gradient T1 is a result of the choice of those targets. (Indeed this paper would likely have been easier to write if the simulations had been conducted with only a T0 target!) Doesn’t really affect the paper, but it would be confusing as written to someone who didn’t know the datasets well already.
Also, while it again doesn’t ultimately affect any conclusions from the paper, the eof-based metric for evaluating AMOC strength is more complicated to interpret than implied herein since a shift in the latitude of overturning circulation, for example, will manifest as a reduction in the magnitude of the principal component… without more work it’s hard to disambiguate true reductions in the strength of the overturning (e.g., looking at maximum of streamfunction) from where that maximum occurs.
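To make the distinction concrete, a minimal sketch (synthetic data, not the authors' calculation) of the two AMOC metrics being contrasted, given an overturning streamfunction psi(time, depth, lat) in Sv:

```python
import numpy as np

rng = np.random.default_rng(0)
ntime, ndepth, nlat = 120, 30, 60                        # placeholder dimensions
psi = 15.0 + rng.standard_normal((ntime, ndepth, nlat))  # synthetic streamfunction, Sv

# 1) Strength-only metric: the maximum of the streamfunction each year.
#    A shift in the latitude of the maximum does not register as a weakening.
amoc_max = psi.reshape(ntime, -1).max(axis=1)

# 2) EOF/PC metric: project anomalies onto the leading EOF of the full pattern.
#    A latitude shift of the overturning cell reduces the projection even if the
#    peak strength is unchanged, so pattern and strength changes are conflated.
anom = psi.reshape(ntime, -1)
anom = anom - anom.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
eof1 = vt[0]                                             # leading spatial pattern
amoc_pc = anom @ eof1                                    # principal-component index
```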
Finally, it might be worth noting that the original simulation in Kravitz et al 2017 that pioneered the strategy used in both GLENS and ARISE yielded a different hemispheric asymmetry in the injection rates than GLENS, despite both being in CESM1(WACCM); the only difference being a change in the land model from CLM4 to CLM4.5. Neither the cloud adjustment to CO2 nor the tropospheric aerosol changes can be responsible for that. I don’t recall having looked at AMOC in that run, but if it’s still around, that might be worth looking at… at least worth acknowledging. Might be something in fast response to CO2 again, but for vegetation in the land model… that is, there may be other factors beyond the 3 identified here that are also relevant when looking at different models
- There’ve been a few recent papers on scenarios for SRM, and NCAR is organizing a workshop on the topic soon; I think the scope of what’s envisioned on SRM scenarios is so much broader than the scope in this paper that including that word in the title is potentially misleading… yes, scenario is relevant to what you’re doing, but you certainly aren’t remotely covering the scope of scenario dependence. The title is not wrong, just may not be what people think of when they read the words.
- Abstract line 1, I don’t think “model dependence” is itself an uncertainty, but rather demonstrates the presence of uncertainty. (Nitpicky, perhaps…)
- L12, is the AMOC behavior “associated” with the rapid adjustment? (If not, that’s the wrong word). I gather from the later discussion that that is a hypothesis for why AMOC responds as it does, but not clear that that is proven.
- L14, shouldn’t be over the timeframe of deployment, but of research prior to deployment. (And the statement is true regardless of whether the target of SAI is to stay below 1.5C or not, so should reword anyway)
- L15 (and elsewhere), the adjectives “significant” in front of flexibility, and “large” in front of uncertainties, don’t actually convey any information. What makes an uncertainty “large” or “small”? It would seem to me that the useful sense of that word ought to be whether some resolutions of the uncertainty would result in a choice to deploy and other resolutions wouldn’t. If any possible value of the uncertainty doesn’t change whether or not someone would choose to deploy, is it meaningful to call it “large”? And, given that that is hard to prove (and I don’t think proven here), not clear to me that there’s any basis for using these adjectives. Ditto L25, for example. (And one could level the same criticism at the IPCC report too, where qualifiers are better defined for climate change, but used arbitrarily for SRM.) Given that these words convey no actual information, but they do convey emotion, I would argue that such qualifiers don’t belong in a scientific journal article (though I’m aware that this is a generic problem with many papers throwing words around without thinking through what those words do or don’t mean).
- L37… seems like some rewording is in order here, in that all 3 of the mechanisms that are identified here aren’t actually specific to SAI, so they’d lead to uncertainty in the response to solar reduction too. (And if there was no control over the interhemispheric gradient, then the uncertainty would lead to uncertainty in that gradient, rather than uncertainty in where to inject to compensate it – a point that should be made much more clearly somewhere in the paper.)
- Note that a lot of the degree symbols didn’t wind up correct in the pdf.
- L124-125, I agree with the plausibility statement, though strictly speaking this should come with some reference to support the assertion.
- Section 3 might benefit from some subsections (on CO2 fast response, AMOC, and tropospheric aerosols)
- First paragraph of Section 3… Before this (probably somewhere in section 2) it would be critical to better describe the goals of the strategy used in GLENS and ARISE, which is currently only briefly noted in passing in lines 84-85, because otherwise a reader not intimately familiar with these datasets would not understand why there is a difference in the aerosol injection rates across latitudes. It really only takes an extra sentence to reiterate that the injection rates are adjusted to maintain not only global mean temperature but also the interhemispheric and equator-to-pole gradients, and that the algorithm determines the distribution of injection across the 4 latitudes that is needed to compensate all 3 metrics, with the balance between NH and SH injection based only on the desire to balance interhemispheric temperatures, and the balance between 15 and 30 degree injection based on the desire to balance the equator-to-pole gradient. (One could further point out that as the radiative forcing from CO2 is hemispherically symmetric, to first order one might expect a symmetric injection strategy to optimally compensate for the CO2 forcing, even with a goal of managing interhemispheric balance.) This comment on the needed forcing seems essential context prior to the current first paragraph of section 3. (A toy sketch of such a feedback loop, for illustration, is given after this list of comments.)
- In that first paragraph, one could also point out that the very first simulation of the control strategy used in GLENS and ARISE, from Kravitz et al 2017, resulted in a nearly hemispherically symmetric injection profile. (The sole difference between that and GLENS being the switch from CLM4 to CLM4.5…) Or maybe this is worth explaining elsewhere… do you know why the change in the land model can also change things? Is that also a vegetation-based fast-response to CO2?
- L170, if you’re going to use “FSNT”, then while that makes perfect sense to those of us who use CESM, might be better to state what it is an acronym for, for the rest of the folks.
- L182-3, of critical relevance here, but not stated, is the role of the SAI strategy. If both simulations had been conducted without any intent to balance interhemispheric temperature gradient, then perhaps the correlations would be stronger – that is, it is the fact that the controller used is trying to deliberately compensate for model differences that matters; this will make the intermodel differences in temperature more similar while the intermodel differences in injection rates (and hence FSNT) will be less similar. (This is similar to my comment above in that you’re glossing over the relevance of the injection strategy in interpreting the results, I don’t think you can do that.)
- L200-202, yes, but (worth pointing out somewhere) that to first order, hemispheric asymmetry in the slow response doesn’t matter… that is, if there is hemispheric asymmetry in the *response* to a symmetric CO2 RF, one would expect to also have a counteracting hemispheric asymmetry in the response to a symmetric AOD… see next comment too.
- L205-206 yes, but with a qualifier… that being that *some* of the uncertainty in how the climate responds to CO2 (climate feedbacks that determine the slow response) is actually reduced in the CO2+SAI case, because the same temperature-dependent climate feedbacks operate in response to both forcings (by definition; see e.g. MacMartin, Kravitz and Rasch, 2015 in GRL, which one of the reviewers claimed was too obvious to publish). E.g., since the purely radiative forcing from CO2 (prior to thinking about cloud adjustments and their resulting RF) is roughly understood (roughly uniform spatially and seasonally), then a hemispherically symmetric AOD would to first order compensate for that, regardless of how hemispherically asymmetric the climate system response to that forcing might be. So this seems a bit too simplistic – the critical observation from your analysis here is that the SAI needs to compensate for the radiative effects of the fast response to CO2 as well…
- L220, important to stress what you’re comparing to. Of course, even in ARISE-SAI-1.5, AMOC is stronger than in SSP2-4.5, it’s just weaker than the reference period… though looking at your figure, if I guess on the missing information not given there, this may be dependent on the metric one chooses to evaluate AMOC strength (at an absolute minimum that needs to be acknowledged). In discussing the AMOC response and the comparison between GLENS and ARISE, it is important to stress that in both cases the presence of SAI strengthens AMOC relative to no SAI, but in the GLENS case it is overcompensated (relative to the change in global mean temperature) while in ARISE it is undercompensated (relative to that). I don’t think this comes across well… it comes across as a fundamental difference in sign, which is simply not true – it’s more a question of degree of compensation by SAI. (That is at least true for an AMOC metric focused on strength alone; see e.g. Figure 3 in https://www.pnas.org/doi/10.1073/pnas.2202230119; that’s the middle-atmosphere version rather than TSMLT, but the plots are nearly identical for ARISE.) The metric considered here risks confounding changes in pattern with changes in strength… if the conclusion that ARISE strengthens AMOC relative to SSP2-4.5 isn’t true for the eof-based method you use here, once you include the relevant baseline case for comparison, then that calls into question how to interpret the eof-based metric.
- L261, the latitude of SAI injections only depends on it if one wants it to depend on it… of course, there are good reasons to want it to depend, but the current wording is too concise. (Again, one could simply fix the latitudes of injection, set the NH and SH injection rates to be the same, choose them to balance T0 only, and then instead of the uncertainty being in the injection rates, it would manifest as uncertainty in the resulting shift in T1 under SAI…)
- L264, wouldn’t it be fairer to say two versions of the same climate model… the similarities between CESM1 and CESM2 are much more than between them and some other modeling center model.
- L271, the sentence is about SAI but then switches to SCI. Everything in this paper would apply equally well to MCB, but this sentence as written shouldn’t switch.
- L271, the last bit of this sentence is wrong. SAI is *already* a “promising” risk-mitigation measure; “promising” generally suggests that you don’t need to reduce all of the uncertainties
- Fig 2, units on panel f are correctly shown in the figure as “K” and wrong in the legend. (IMO you could get rid of the units on all the subpanels and just state in the caption) BUT, panel f should also be scaled by the amount of cooling offset, otherwise it will greatly overemphasize the residual in GLENS-SAI relative to residuals in ARISE. Ditto Figure 3f. (Actually, now I’m not sure how to interpret panel f at all… I was thinking that if you subtracted the respective reference time-period from each simulation, and then normalized by the amount of cooling, then that would tell you something about the pattern of the residual GHG+SAI in each case… but did you just take some time period and subtract the two, despite the different background emissions and different temperature targets? How is one supposed to interpret that?)
- Fig 5a, should also include the line for CESM2-WACCM-SSP2-4.5, for context, either in addition to or instead of CESM2-WACCM6-SSP585. Having said that, this figure confuses me in multiple ways. I presume the units are Sv? Is the eof calculated once, or is it a different calculation for each model? Each simulation? Each decade? If you keep the same eof pattern, how do you disentangle a change caused by a shift in the strength of the circulation from a shift in the latitude of the peak? I know, for example, because I’ve plotted it, that if you use the maximum of the streamfunction as your AMOC metric, then you will clearly show that ARISE-SAI *recovers* AMOC strength relative to SSP2-4.5 (but not back to 2030 levels), so if that does not hold for your choice of metric, then that point is pretty relevant to point out to the reader – that the conclusions on AMOC depend on what metric you happen to use to calculate it. (I’m guessing that’s true as I’d expect SSP2-4.5 to not be worse than SSP5-8.5.)
- Fig 5 panels b-e, change relative to what? Relative to their respective reference periods, or relative to unmitigated at the same time period?
- Panel 7f isn’t particularly meaningful, given that it isn’t scaled in any way… what’s the message?
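Picking up the earlier comment on describing the strategy's goals, here is a toy sketch, for illustration only, of a proportional-integral feedback loop of the general kind used in GLENS and ARISE: injection at four latitudes is adjusted each year to steer three temperature metrics (T0 global mean, T1 interhemispheric gradient, T2 equator-to-pole gradient) back toward their targets. The gains, the pattern-to-latitude map, and the handling of non-negative injection are placeholders, not the values or algorithm used in either simulation set.

```python
import numpy as np

# Placeholder PI gains for the three controlled metrics (T0, T1, T2).
kp = np.array([0.03, 0.05, 0.05])
ki = np.array([0.01, 0.02, 0.02])

# Assumed linear map from injection at 30S, 15S, 15N, 30N (columns) to the
# uniform (L0), antisymmetric (L1), and equator-to-pole (L2) forcing patterns
# (rows). Illustrative numbers only.
pattern_map = np.array([[ 1.0,  1.0,  1.0, 1.0],
                        [-1.0, -0.5,  0.5, 1.0],
                        [ 1.0, -1.0, -1.0, 1.0]])

def update_injection(t_err, integral):
    """One annual PI update: temperature errors (T0, T1, T2) -> injection rates."""
    integral = integral + t_err                     # accumulate the integral term
    desired = kp * t_err + ki * integral            # desired L0, L1, L2 amplitudes
    rates, *_ = np.linalg.lstsq(pattern_map, desired, rcond=None)
    return np.clip(rates, 0.0, None), integral      # injection cannot be negative

rates, integral = update_injection(np.array([0.4, -0.1, 0.05]), np.zeros(3))
print(rates)  # illustrative Tg SO2 per year at 30S, 15S, 15N, 30N
```

The point of the sketch is only that, given T0/T1/T2 targets, inter-model differences show up in the injection rates the controller must choose rather than in the resulting temperature gradients, which is the framing the reviewer asks the authors to make explicit.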
Citation: https://doi.org/10.5194/egusphere-2022-779-RC2
-
AC2: 'Reply on RC2', John Fasullo, 09 Nov 2022