This work is distributed under the Creative Commons Attribution 4.0 License.
Robust assessment of Solar Radiation Modification risks and uncertainties must include shocks and societal feedbacks
Status: final response (author comments only)
- RC1: 'Review on egusphere-2026-28', Daniele Visioni, 19 Feb 2026
- CC1: 'Comment on egusphere-2026-28', Ben Kravitz, 19 Feb 2026
I’ll be honest; I struggled with this paper. I agree that a goal of better characterizing risks from SRM is useful. But if that’s the goal, I don’t think this paper meets the bar for either laying out the problem or proposing a solution. I unfortunately must recommend rejection. I will try not to reiterate many of the other reviewers’ points about strawman arguments, although I do share their concerns – the SRM community has done a lot of work that, in this paper, is uncredited.
Essentially, as far as I understand it, the authors propose a framework in which SRM can directly modify the underlying SSP via feedbacks. Good idea. But there are two major problems with this. First, it’s not a new idea – these are climate-society feedbacks, which the authors don’t seem to reference. Second, it’s not clear why this is a unique problem to SRM or why SRM is a useful stress test. The authors point out that the COVID-19 pandemic’s impact on climate was negligible. That was a pretty big shock to the system and highly disruptive throughout the world, as well as very difficult to predict or project. So can the authors provide a sense of why SRM would be a bigger or different shock, or provide some indication as to what modeling challenges it presents that COVID-19 didn’t? This paper presents that as a foregone conclusion, but I don’t think it’s obvious.
I think these issues come down to the authors’ conceptualization of risk and research to reduce risk. There are definitely risks that the current ESM and IAM frameworks don’t capture. But I don’t see why the proposed framework is the right answer. To me it looks so unconstrained (more on that below) that you could kind of get any answer you want, and that’s not a useful way to assess risk either. But again, I’m not sure why SRM specifically presents a problem for ESM/IAM coupling. This is a question that the IAM community has been wrestling with for at least 20 years (and there are numerous citations that I thought I would see about this), well before SRM was being seriously considered as a research topic by that community.
Ultimately it will be impossible to anticipate every single shock that could ever happen. The authors, as well as many other people researching SRM (some of whom are cited), have indicated some options. But that doesn’t tell you much about risk. Risk is not a set of “what ifs”. It is a process in which you can use quantitative and qualitative tools to assign probabilities and consequences. For risks that are not reducible, there are techniques that people can use, such as forecasting, projection, and adaptive management. For some reason, these concepts go mostly unmentioned. It is a very important question, and one that the SRM community is specifically grappling with, as to which uncertainties can be reduced, which can be managed, and which must be tolerated.
Relatedly, there are vast swaths of literature that the authors are not referencing. Topics include climate-society feedbacks, model-predictive control, adaptive management, socioeconomic modeling (especially of shocks), risk, catastrophe theory, antecedents of conflict, governance mechanisms, etc. And, of course, SRM itself (see the review by Visioni for specific citations). I don’t expect the authors to know everything. But I think proposing an idea requires understanding that many other people have done similar work. If the only goal is to provide a framework (which seems likely, as the authors say “This paper does not prescribe a specific modeling solution”) then I remain unconvinced that the framework is particularly novel, nor helpful for the main problems SRM is facing.
I have plenty of other comments on specific lines, but I don’t include them because I suspect that the revised paper would need to be so different as to make them moot. I think the authors could reformulate their arguments, in line with my comments and other reviewers’ comments, to make a useful contribution about how to quantify risks in SRM, which risks are indeed novel to SRM, and what modeling tools are necessary to address those risks. But unfortunately I don’t think they have succeeded at this task, and I struggle with understanding other purposes that their proposed framework serves.
Citation: https://doi.org/10.5194/egusphere-2026-28-CC1
RC2: 'Comment on egusphere-2026-28', Gideon Futerman, 26 Feb 2026
This paper is, in my mind, trying to present a perspective on a rather significant question in the assessment of SRM: how we can integrate the non-ideal dynamics of SRM governance into SRM scenario generation. Given both how significant this question is, and the increasing discussion in the literature on it, a discussion presenting the building blocks of an alternative approach, or building on the present state of the literature, would be very timely.
At its core, I like what the paper is trying to do. Creating novel modelling frameworks that let us make progress on better exploring different futures that consider the full range of socio-political dynamics is an important problem in the field. Indeed, I am less optimistic than other reviewers that the SRM modelling community is making the right trade-offs in this regard at present.
However, I cannot accept the paper as it is. The authors can be taken either as making strong claims which, whilst I believe they could be supported, are not adequately argued for here, or as pointing out particular flaws in current modelling practices that have generally been pointed out by other authors before. Their proposed architecture, if it is to move beyond previous literature, must be more practical than the current sketch.
This paper does not do an adequate job at engaging with the existing literature on SRM and non-ideal scenarios. I agree with the authors that existing modelling practices mostly use idealised scenarios and have mostly failed to adequately model concerning non-ideal scenarios. However, the failure to engage with the literature leaves the reader with an importantly distorted impression. Firstly, there have been a number of attempts made to engage with more realistic non-ideal scenarios in recent years. Secondly, modellers have generally made the choice to model SRM in a specific way for specific reasons, some stated and some implicit.
The second issue I have is with the supposed subject of the paper. It is never really explained what “robust” assessment of SRM entails. What is it that makes an assessment robust, and how high a bar are we drawing? These need to be laid out by the authors. I think there is a strong argument to be made that the authors are correct in their claim; however, engagement with what robust risk assessments need to entail is needed to make such a claim. I would also recommend looking into the debates around Global Catastrophic Risk in the context of both SRM and climate change, where similar arguments to this paper’s have been made for a while.
This argument in fact needs to go further. The paper claims it shows “what must be represented for adequate SRM assessment”; the authors, if they wish to make this claim, need to show not only that “robust” assessments of SRM must include these shocks but that “adequate” (for what purpose?) assessments must as well. This requires laying out the criteria for adequacy of assessment, and being explicit about the value judgements used in such an assessment.
I do think such an argument could be made. I think there are many ways to make it, and I will briefly sketch one out that I am sympathetic to, but I am sure there are similar arguments that the authors could make. Kemp et al. (2022) make the argument that consideration of Global Catastrophic Risk in the context of climate change is needed for truly adequate risk assessment. Works on SRM and GCR (e.g. Halstead, 2018; Futerman and Beard, 2023) have then argued that the “fast loop dynamics” are the key drivers of Global Catastrophic Risk from SRM. Taken together, these claims suggest that without fast loop assessments, truly adequate risk assessments cannot be done.
Further to this, for this paper to be actionable, it probably needs to make the argument that the trade-offs made by the modelling community that have led to the current modelling approach are not the correct trade-offs to make. Namely, the authors have to argue that moving closer to whatever notion of “robust” or “adequate” risk assessment of SRM they choose is the direction the field should move in, even at the cost of fewer resources devoted to other modelling activities. (Whilst not strictly needed for the paper to make sense, given how normatively loaded this is, such an argument seems important.) This paper fails to do this, instead recapitulating the various (valid) issues with current modelling frameworks. A paper that dove into weighing up when and why we should prioritize the sorts of questions this approach aids us in would be a far more helpful contribution than the paper currently is, and would move it much closer to being publishable.
Another approach the authors could take is to scale down the ambition of their claims, to merely providing a framework that could guide the modelling of non-ideal scenarios. Whilst it is incumbent on them in this case to show that this usefully differs from current proposed approaches, I think they could do that. However, if they were to reduce the ambition of their claims, it appears to me they would need to add significantly to this paper in order for it to be novel enough to publish. Whilst previous papers on non-ideal scenarios don’t contain every element in this paper, enough is similar that I don’t think this paper would cross the novelty bar for publication. With more detailed discussions of how to practically actualise the “fast loop” than the current brief sketch provides, this paper would provide a useful contribution to the literature. Essentially, an expansion of Section 5 that begins to show how the framework is actionable would be what needs to be added to the paper. My view is that the core of the paper should essentially be showing how different approaches in section 5 could be applied to the developed architecture.
On the other side, I do rather like the formalizations present in the paper, and do think the architecture could be the foundation for some very important progress. However, I think this needs to be more fleshed out and actioned before it is publishable. Similarly, I think the argument that such an architecture is needed is important (especially from a GCR perspective), but think the argument needs to be stronger before this is publishable.
Specific Comments:
135-150: I’m basically not sure the argument really works. I did find it quite hard to parse, but it seemed to compare SRM to ideal CDR scenarios with purely rational, climate-focused decision-makers, whereas much of the point of the fast loop is to eliminate some of these idealising features. More broadly, it is not clear to me why the Moral Hazard is in the “Fast Loop” under SRM, but not for CDR scenarios. This seems fairly at odds with the literature on moral hazard, which makes far less of a distinction between moral hazard generated from CDR and from SRM.
178: “purely unilateral”: It is strange to assume unilateral deployment must be regional. Many discussions of unilateral deployment may assume limited injection sites, but this does not necessarily mean regional deployment.
186: A number of important scenarios seem unable to be modelled under the operational rules. Counter-geoengineering is a key one. Whilst an argument could be made for including it under Operational Inefficiency, the impacts of counter-geoengineering may be sufficiently difficult that I am unsure that rule will adequately capture the scenarios. Similarly, deployment type is not included, despite the fact that whether it is MCB or SAI or a cocktail of both is absolutely essential for modelling and may be impacted by the Fast Loop processes. Finally, whilst “Deployment Regionality” covers many scenarios, scenarios with multiple deployments, each individually optimized but with no joint optimization, are also not discussed. I would recommend revisiting the Operational Rules so they better characterise different types of SRM deployment.
187: Similarly, important scenarios are not considered in the Context Loop Properties. Especially termination-relevant factors seem to be strongly affected by Societal Vulnerability to Catastrophic Shocks (Baum et al., 2012), with a Global Catastrophic Shock amongst the more likely causes of termination. Such shocks do not happen in isolation; they are dependent on the context in which they impact, which is in turn dependent on the rest of the scenario. However, such vulnerability is not included at present in the Context Loop Properties.
364: I think you should at least discuss decision-theoretic frameworks. These are a very important part of robust assessments, and may become even more significant if this approach makes systematic scenario generation harder. This paper seems to really suggest a move in a particular direction for modelling scenarios, towards sets of scenarios where normal multi-model ensembles and systematic assessment is harder. If this is so, then guidance must be included as to what comes next.
379: “The architecture we have outlined here is not a solution to this problem; it is an attempt to highlight the scope of the risk assessment challenge.”
My assessment is that this is not a unique contribution to the literature in its current form because of this. Other papers have laid out the difficulty of robust risk assessment of SRM from current models, using similar arguments to yours. For this paper to be novel, it must actually start to move towards a solution to the problem.
Citation: https://doi.org/10.5194/egusphere-2026-28-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 783 | 320 | 17 | 1,120 | 24 | 21 |
The first comment that I have for this piece is that the authors, for some reason, have decided to completely ignore much of the conversation already happening in SRM research about scenarios, and have essentially decided to build a strawman of it to criticize, instead of fairly reviewing already existing discussions, engaging with them, and only at that point criticizing them, which is fine as long as this is done on the basis of what past research actually says. I will note that I already made this point when I reviewed another version of this piece for another journal, where it was ultimately not accepted, but there have been no advances by the authors on this aspect. I can only assume they do not find the idea of engaging with recent literature valuable; if they persist, then, this being a public forum, I hope a reader will at least find my comment valuable for added context. As a side note, my review will be signed, and I don’t want to make a mystery of the fact that (as the co-chair of GeoMIP) I was involved in much of the scenario work the authors decided to ignore.
Let’s start with this phrase in the introduction:
“This leaves the most critical, policy-relevant risks of SRM, such as governance failure and geopolitical conflict (Pezzoli et al., 2023; Cherry et al., 2024), abrupt termination (Parker and Irvine, 2018), and regional impact disparities (Heyen et al., 2015) un-modeled by either ESM or IAM paradigms.”
This is just not true. MacMartin et al. (2022) simulated a broad range of scenarios, including termination, phase-out and multiple levels of cooling, and discussed the importance (and challenges) of assessing a broad range of them as well; termination was also part of one of the initial GeoMIP experiments, G4, which was analyzed for instance in Trisos et al. (2018), and will be part of the CMIP7 GeoMIP scenarios. Quaglia et al. (2024) further expanded the space to volcanic eruptions happening during SAI (already explored in Laakso et al., 2016), and Brody et al. (2024) and Pfluger et al. (2024) analysed scenarios with delayed deployments, the latter especially focused on how delay would fail to revert some key tipping dynamics. Very recently, Estrada et al. (2026) also discusses this at length.
More broadly, unlike what the authors say in the conclusions:
“This is why naïve assessment built on idealised assumptions poses such acute risks at this juncture. If the research community evaluates SRM primarily through “best-case” scenarios – smooth deployment, perfect cooperation, orderly termination – it provides a systematically distorted evidence base for decisions that could prove irreversible.”
The discussion of how much and when to simulate non-ideal scenarios has often taken place within GeoMIP, which the authors do not acknowledge. For instance, see the review in Visioni et al. (2023a) (Section 6.1.1 is dedicated explicitly to this, and emerged from very long discussions within the community), and Visioni et al. (2024) (which the authors cite, but apparently did not read, because the reasoning behind the scenario choices was discussed there with some attention, rather than being naive, which in the English language means “a lack of experience, wisdom, or judgment”). I’ll note that both works were the result of years of engagement with the community, talks by experts about scenario generation, evidence gathered from past studies, and suggestions from modeling groups, impact analysis groups and governance experts (see the yearly meeting reports here: https://climate.envsci.rutgers.edu/GeoMIP/publications.html#np). Because, as we laid out in many of these works, it is hard to design coordinated, multi-model experiments that can both shine a light on the climatic uncertainties and meaningfully span the scenario space, Visioni et al. (2023a) amply discussed the potential to instead use more idealized simulations to train climate emulators able to better span the space. Hence the line of work in Farley et al. (2024) and Farley et al. (2026) to develop such an emulator, capable both of exploring the space of interruptions and terminations (Farley et al., 2024) and that of uncoordinated deployments and regional impacts (Farley et al., 2026), using a broader range of idealized simulations described in the GeoMIP testbed experiments in Visioni et al. (2023b). There is other literature on such non-optimal scenarios, including single-case model simulations, which demonstrates the community is indeed not leaving this aside: see Diao et al. (2025) for a case of unilateral action, Wan et al. (2024) for a case using MCB, Kwiatkowski et al. (2015) for another “edge” case of intervention, Jackson et al. (2015) for a “real world” implementation of SAI targeted at restoring sea ice considering imperfect observations, etc. But what made those cases useful to analyse and possible to model was the work of more idealized or regular simulations that allowed for model and scenario improvement, and for diagnosis of uncertainty.
Lastly, the authors also ignored large pieces of the literature in realms beyond climate science, curiously forgetting also the works that offer various fair criticisms of current scenarios but that at least try to engage in the broader discourse rather than sidestep it. See for instance Wiertz et al. (2015), McLaren and Corry (2021), Buck (2022), Keys (2023), Beckage et al. (2025) (which explicitly talks about a lot of the “societal feedbacks” in this work’s title…) and much more.
Now, I could probably continue on this line, but I want to focus on why I made (again) the list of works these authors missed. A perspective piece is not a review piece, and yet it must fairly address others’ perspectives before engaging in new proposals. The authors fail to do so, and given my previous feedback, I cannot attribute this to naivete on their part, but rather to a deliberate choice to erase other perspectives. If the authors wanted to engage, they would describe how their proposed framework integrates these perspectives, rather than ignoring them.
Moving on to my second point, the authors often repeat throughout the piece that any framework but theirs is wholly inadequate, naive, etc., mainly because there is something in SRM that makes the kind of exploration of the broader set of scenarios they propose a conditio sine qua non for any robust assessment. It would be interesting to know how the authors define “robustness”. Personally, and also in the context of IPCC assessments, robustness is something I identify with perspectives such as Lloyd’s (2010, 2015) on climate models’ robustness: in her words, robustness is the “repeated production of the empirically successful model prediction or retrodiction against a background of independently-supported and varying model constructions”. In this sense, a robust SRM assessment is one where the tools used for the assessment are well validated (using historical analogues, of course) and reassessed across model generations, with the uncertainties well quantified. In this work, on the other hand, robustness is implied to be only or mainly the exploration of the correct set of scenarios which the authors propose.
There are two reasons why I think this is problematic: the first is that it is unclear how many of the scenarios proposed by the authors would be helpful for understanding physical processes and inter-model uncertainties. Indeed some of those scenarios essentially do not assess a world in which SRM is implemented, but rather assess a world in which SRM is not implemented or fails, and in which what is assessed is mainly a specific assumption about “moral hazard” and the impacts that changes in emissions due to failed/not efficient SRM would have on climate. On the topic of moral hazard, which the authors often mention, it is important to note that they never specifically identify what they mean by it, nor do they acknowledge existing literature on the topic. It is, instead, taken as a given, with (unreferenced) phrases like “SRM’s moral hazard, by contrast, operates through rapid, sub-decadal climate responses coupling directly to fast political dynamic”. I don’t think this or similar assertions are supported in any way in the available experimental literature, that often finds the opposite, see for instance Andrews et al. (2022), Cherry et al. (2023); whereas if the authors wanted to partly support their assertions, they could point to Abatayo et al. (2020) and Acemoglu and Rafey (2018).
The second reason why I think the authors’ assertions about the need to consider their framework of scenarios as the only way to make valid SRM assessment are problematic is that criticizing the scenarios used as “unrealistic” to affirm that climate science assessments are wrong or biased was one of the central arguments of the now withdrawn “A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate” (Climate Working Group, 2025). Quoting from there “Widespread use of RCP8.5 as a no-policy baseline has created a bias towards alarm in the climate impacts literature.”; this argument is not much different from the one the authors use here. One can of course criticize single scenarios (they are, after all, not meant to be predictions!), and propose different ones, but only insofar as it remains clear that the underlying science done on top of these scenarios is valid and useful, or prove otherwise. The IPCC reports are no less robust because they use scenarios that might (or might not) be unrealistic, or currently deprecated. Why, in the case of SRM, this should be different is something the authors justify because “This simplification can be justified (for greenhouse gas mitigation modeling) by societal and techno-economic as well as physical considerations. On the societal and techno-economic side, future emission trajectories are dominated by long-term developments including population dynamics, economic development and emission intensity of the economy.” and “The climate system’s response to GHG emissions is slowly emerging over multi-decadal horizons. This lag effectively enables “fast terms” to be averaged out when modeling the climate effect of emissions pathways”. 
I don’t think this is persuasive, especially since there is a lot of emerging science showing that, if one takes the broader view of short-lived forcers and not just CO2, climatic impacts arising from specific emission pathways can emerge quite soon (see Samset et al., 2025 or Gettelman et al., 2024 as examples). Does the emergence of these new understandings about rapid climate responses to mitigation mean that current climate change assessments do not meet “minimum conditions for assessment” criteria? I don’t think so, but it would flow from the authors’ arguments. Similarly, there are other MIPs that, very legitimately, use idealized pathways to explore models' behavior, and whose conclusions are then used for policy-relevant decision making or for broad dissemination of results. With a large overlap in authorship list, ZECMIP is one of those (Jones et al., 2019), and now flat10MIP. I always found utterly puzzling the (widespread) criticisms of ZECMIP results about zero emission commitments because they're too idealized, or don't include some key dynamics like aerosol emissions in most simulations. The simplicity of the scenarios devised is a merit, and the inter-model spread they show an incredibly important result.
My third main point deals with the practicalities of what the authors propose. Section 4 concludes with the assertion that the authors are identifying “what must be represented” to meet their threshold of a meaningful assessment. However, section 5 starts by saying that “This paper does not prescribe a specific modeling solution”. So… what? The final assertion from the Conclusions seems to be that, if one thinks about the large unpredictability of societal and geopolitical behaviors, there is simply no way to model or conceptualize any future, and there should be no attempt to do so, because it will fail; but that, while this is not that big of a problem for climate mitigation, predictive power over societal behavior is such a fundamental part of any eventual SRM assessment that simply nothing of value could be said about what SRM would do to the climate, and thinking otherwise is “such an acute risk”. Ultimately then, as the authors say in the last phrase, the framework they themselves propose is not even an attempt at a solution. It’s just pointing at a very complicated scheme on a board and saying “see, this is just too complicated, let’s just give up”. I don’t think this is true, but mainly I don’t see this as being useful. I think current research, rather than being naive in assessing “SRM primarily through “best-case” scenarios”, is very purposely at the stage where its main service is in identifying knowledge gaps and highlighting areas of certainty and uncertainty. 
Plenty of research that is available right now based on “naive” scenarios is already capable of identifying and discussing trade-offs and uncertainties in the context of SRM (to avoid other references, let me just point the authors to the rather comprehensive list of research in particular emerging from the Global South here: https://www.degrees.ngo/published-research/ ), and much has been learned about models’ behavior, risky strategies (overcooling of the tropics under equatorial injections, or hemispherically-imbalanced SRM) and potential impacts across a variety of Earth systems, from ozone impacts to agriculture. How would these assessments have benefitted from the broader set of scenarios proposed here? Would they have yielded more robust insight into the inner working of climate, or the discrepancies between models? The authors need to make the case for why the specific realizations of chaos (the sub-optimal, failure-mode scenarios) they propose are a requirement for producing the generalized insights that an overarching assessment must return, and that they think a “simpler” modeling framework doesn’t provide. Introducing chaos into the scenarios one assesses is not the best way to assess the consequences of chaos, in fact, it undermines the effort. Orderly, scientifically motivated experimental design can provide the greatest insights into the consequences of well-managed, idealized SRM as well as the failure modes in chaotic versions of SRM. The climate response in these simpler, cleaner experiments that yield valuable physical insight can then be leveraged in emulators and simpler modeling frameworks to illustrate the consequences of specific instances of chaos (as is happening already), without claiming that the lack of such scenarios in the context of already existing modeling framework is a great moral and scientific fault.
This doesn’t mean that non-ideal cases can’t be explored, but rather that their exploration has to be both systematic and practical. What the authors of this piece do is situate their proposal ambiguously between a simple classification of scenarios (but that was already done for instance in Lockley et al., 2022) and the proposal to actually integrate the fast-looped dynamics of both behavioral dynamics and political ones, without ever detailing how this could be done practically. But suggestions on how to do this exist: for instance, Beckage et al. (2025) proposed the inclusion of behavioral and cognitive processes in scenario exploration of SRM using social climate models (see Beckage et al., 2022). The difference between the Beckage proposal and this one is that in the other case, the explicit intent is to explore, using modeling tools that already exist and can be developed, how our own assumptions about “moral hazard” affect scenario generation, and then classify them, rather than hand-picking “ultimate moral hazard” cases a priori.
Ultimately, the authors’ assertion that previous assessments are not robust is their own, and this being a perspective, they’re within their right to have their own opinion. But I don’t think their assertion is supported or robust itself. If the authors want to pursue the publication of this piece, and the editor agrees, then I can only hope a revised version will acknowledge the presence of others’ perspectives established over time, that it will avoid misrepresenting them, and avoid too broad claims about others’ failures while failing to suggest operable solutions themselves.
References
Abatayo, A.L., Bosetti, V., Casari, M., Ghidoni, R. and Tavoni, M., 2020. Solar geoengineering may lead to excessive cooling and high strategic uncertainty. Proc. Natl. Acad. Sci. U.S.A., 117(24), pp.13393-13398. https://doi.org/10.1073/pnas.1916637117
Acemoglu, D. and Rafey, W., 2018. Mirage on the horizon: Geoengineering and carbon taxation without commitment (No. w24411). National Bureau of Economic Research.
Andrews, T.M., Delton, A.W. and Kline, R., 2022. Anticipating moral hazard undermines climate mitigation in an experimental geoengineering game. Ecological Economics, 196, p.107421.
Beckage, B., Lacasse, K., Raimi, K.T. and Visioni, D., 2025. Models and scenarios for solar radiation modification need to include human perceptions of risk. Environmental Research: Climate, 4(2), p.023003.
Beckage, B., Moore, F.C. and Lacasse, K., 2022. Incorporating human behaviour into Earth system modelling. Nature Human Behaviour, 6, 1493–1502.
Brody, E., Visioni, D., Bednarz, E.M., Kravitz, B., MacMartin, D.G., Richter, J.H. and Tye, M.R., 2024. Kicking the can down the road: understanding the effects of delaying the deployment of stratospheric aerosol injection. Environmental Research: Climate, 3(3), p.035011.
Buck HJ (2022) Environmental Peacebuilding and Solar Geoengineering. Front. Clim. 4:869774. doi: 10.3389/fclim.2022.869774
Cherry, T.L., Kroll, S., McEvoy, D.M., Campoverde, D. and Moreno-Cruz, J., 2023. Climate cooperation in the shadow of solar geoengineering: an experimental investigation of the moral hazard conjecture. Environmental politics, 32(2), pp.362-370.
Climate Working Group (2025) A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate. Washington DC: Department of Energy, July 23, 2025
Diao, C., Keys, P. W., Bell, C. M., Barnes, E. A., & Hurrell, J. W. (2025). A model study exploring the decision loop between unilateral stratospheric aerosol injection scenario design and Earth system simulations. Earth's Future, 13, e2024EF005455. https://doi.org/10.1029/2024EF005455
Estrada, F., Bastien-Olvera, B.A., Calderon-Bustamante, O., Altamirano, M.A., Muñoz-Sánchez, R., Moreno-Cruz, J. and Botzen, W., 2026. Economic assessment of SRM under socio-political and geophysical tipping dynamics. Environmental Research: Climate, 5(1), p.015015.
Laakso, A., Kokkola, H., Partanen, A.-I., Niemeier, U., Timmreck, C., Lehtinen, K. E. J., Hakkarainen, H., and Korhonen, H.: Radiative and climate impacts of a large volcanic eruption during stratospheric sulfur geoengineering, Atmos. Chem. Phys., 16, 305–323, https://doi.org/10.5194/acp-16-305-2016, 2016.
Lloyd, Elisabeth A. “Confirmation and Robustness of Climate Models.” Philosophy of Science 77, no. 5 (2010): 971–84. https://doi.org/10.1086/657427.
Jackson, L. S., Crook, J. A., Jarvis, A., Leedal, D., Ridgwell, A., Vaughan, N., & Forster, P. M. (2015). Assessing the controllability of Arctic sea ice extent by sulfate aerosol geoengineering. Geophysical Research Letters, 42(4), 1223–1231. https://doi.org/10.1002/2014GL062240
Jones, C. D., Frölicher, T. L., Koven, C., MacDougall, A. H., Matthews, H. D., Zickfeld, K., Rogelj, J., Tokarska, K. B., Gillett, N. P., Ilyina, T., Meinshausen, M., Mengis, N., Séférian, R., Eby, M., and Burger, F. A.: The Zero Emissions Commitment Model Intercomparison Project (ZECMIP) contribution to C4MIP: quantifying committed climate changes following zero carbon emissions, Geosci. Model Dev., 12, 4375–4385, https://doi.org/10.5194/gmd-12-4375-2019, 2019.
Keys, P. W., 2023. The plot must thicken: a call for increased attention to social surprises in scenarios of climate futures, Environ. Res. Lett. 18 081003
Kwiatkowski, L. et al., 2015. Atmospheric consequences of disruption of the ocean thermocline. Environ. Res. Lett., 10, 034016.
Farley, J., MacMartin, D.G., Visioni, D. and Kravitz, B., 2024. Emulating inconsistencies in stratospheric aerosol injection. Environmental Research: Climate, 3(3), p.035012.
Farley, J., MacMartin, D. G., Visioni, D., Kravitz, B., Bednarz, E., Duffey, A., and Henry, M.: A Climate Intervention Dynamical Emulator (CIDER) for Scenario Space Exploration, Geoscientific Model Development [preprint], https://doi.org/10.5194/egusphere-2025-1830, 2026, accepted.
Gettelman, A., Christensen, M. W., Diamond, M. S., Gryspeerdt, E., Manshausen, P., Stier, P., et al. (2024). Has reducing ship emissions brought forward global warming? Geophysical Research Letters, 51, e2024GL109077. https://doi.org/10.1029/2024GL109077
Lockley, A., Xu, Y., Tilmes, S., Sugiyama, M., Rothman, D., & Hindes, A. (2022). 18 Politically relevant solar geoengineering scenarios. Socio-Environmental Systems Modelling, 4, 18127. https://doi.org/10.18174/sesmo.18127
MacMartin, D.G., Visioni, D., Kravitz, B., Richter, J.H., Felgenhauer, T., Lee, W.R., Morrow, D.R., Parson, E.A. and Sugiyama, M., 2022. Scenarios for modeling solar radiation modification. Proceedings of the National Academy of Sciences, 119(33), p.e2202230119.
McLaren, D. and Corry, O.: Clash of Geofutures and the Remaking of Planetary Order: Faultlines underlying Conflicts over Geoengineering Governance, Global Policy, 12, 20–33, 2021.
Pflüger, D., Wieners, C. E., van Kampenhout, L., Wijngaard, R. R., & Dijkstra, H. A. (2024). Flawed emergency intervention: Slow ocean response to abrupt stratospheric aerosol injection. Geophysical Research Letters, 51, e2023GL106132. https://doi.org/10.1029/2023GL106132
Samset, B.H., Wilcox, L.J., Allen, R.J. et al. East Asian aerosol cleanup has likely contributed to the recent acceleration in global warming. Commun Earth Environ 6, 543 (2025). https://doi.org/10.1038/s43247-025-02527-3
Trisos, C.H., Amatulli, G., Gurevitch, J., Robock, A., Xia, L. and Zambri, B., 2018. Potentially dangerous consequences for biodiversity of solar geoengineering implementation and termination. Nature Ecology & Evolution, 2(3), pp.475-482.
Quaglia, I., Visioni, D., Bednarz, E.M., MacMartin, D.G. and Kravitz, B., 2024. The potential of stratospheric aerosol injection to reduce the climatic risks of explosive volcanic eruptions. Geophysical Research Letters, 51(8), p.e2023GL107702.
Visioni, D., Kravitz, B., Robock, A., Tilmes, S., Haywood, J., Boucher, O., Lawrence, M., Irvine, P., Niemeier, U., Xia, L., Chiodo, G., Lennard, C., Watanabe, S., Moore, J. C., and Muri, H.: Opinion: The scientific and community-building roles of the Geoengineering Model Intercomparison Project (GeoMIP) – past, present, and future, Atmos. Chem. Phys., 23, 5149–5176, https://doi.org/10.5194/acp-23-5149-2023, 2023a.
Wan, J.S., Chen, C.-C.J., Tilmes, S. et al. Diminished efficacy of regional marine cloud brightening in a warmer world. Nat. Clim. Chang., 14, 808–814 (2024). https://doi.org/10.1038/s41558-024-02046-7
Wiertz, T. (2016). Visions of Climate Control: Solar Radiation Management in Climate Simulations. Science, Technology, & Human Values, 41(3), 438–460. https://doi.org/10.1177/0162243915606524
Visioni, D., Bednarz, E. M., Lee, W. R., Kravitz, B., Jones, A., Haywood, J. M., and MacMartin, D. G.: Climate response to off-equatorial stratospheric sulfur injections in three Earth system models – Part 1: Experimental protocols and surface changes, Atmos. Chem. Phys., 23, 663–685, https://doi.org/10.5194/acp-23-663-2023, 2023.