the Creative Commons Attribution 4.0 License.
Brief communication: Sea-level projections, adaptation planning, and actionable science
Abstract. As climate scientists seek to deliver actionable science for adaptation planning, there are risks in using novel results to inform decision-making. Premature acceptance can lead to maladaptation, confusion, and practitioner “whiplash”. We propose that scientific claims should be considered actionable only after meeting a confidence threshold based on the strength of evidence as evaluated by a diverse group of scientific experts. We discuss an influential study that projected rapid sea-level rise from Antarctic ice-sheet retreat but in our view was not actionable. We recommend regular, transparent communications between scientists and practitioners to support the use of actionable science.
Status: closed
CC1: '"Actionable" for whom, in what decision context?', Robert Kopp, 15 Mar 2024
I read this brief comment with interest, and found some core issues troubling.
Fundamentally, the authors discuss 'actionable' science, but they discuss it stripped of context. Action is defined by the American Heritage Dictionary as 'organized activity to accomplish an objective'. Science cannot be judged actionable, or not, outside the context of an organized activity and an objective. It makes little sense to talk about something being 'actionable' in general, outside of a specific decision context.
The authors neglect the extensive literature on decision science and risk analysis relevant to using sea-level projections in adaptation decision making. For a relatively recent review, see Keller, K., Helgeson, C., & Srikrishnan, V. (2021). Climate risk management. Annual Review of Earth and Planetary Sciences, 49, 95-116, https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-080320-055847.
In the specific context of communicating sea-level uncertainty and ambiguity, the authors should also see Kopp, R. E., Oppenheimer, M., O’Reilly, J. L., Drijfhout, S. S., Edwards, T. L., Fox-Kemper, B., ... & Xiao, C. (2023). Communicating future sea-level rise uncertainty and ambiguity to assessment users. Nature climate change, 13(7), 648-660, https://www.nature.com/articles/s41558-023-01691-8. Given the direct relevance, this latter omission is particularly surprising.
Why do the organized activity and the objective matter?
Broadly, high-end sea-level rise scenarios, including low-confidence processes, are valuable in flexible, adaptive decision-making. This is shown by a number of papers, but perhaps most clearly and directly for this context in a preprint by Feng et al. (https://doi.org/10.22541/essoar.170914510.03388005/v1).
Among other analyses, Feng et al. compare idealized protection schemes for Manhattan under (1) a static optimal approach, where a single sea wall elevation must be picked based on available knowledge today, and (2) a variety of dynamic approaches, where sea wall height can be periodically adjusted based on new information. (I focus particularly on the 'reinforcement learning' approach described therein).
They consider two cases where projects are planned under inaccurate sea-level rise projections: (A) where planning takes place under the SSP5-8.5 low-confidence projections but reality corresponds to the SSP2-4.5 medium-confidence projections, and (B) where planning takes place under the SSP2-4.5 medium-confidence projections but reality corresponds to the SSP5-8.5 low-confidence projections.
In the former case -- where high-end projections are used and reality underperforms -- the expected net present value cost is $2.3 billion, $1.0 billion more than with the correct (lower) distribution, if a static approach is taken. With a flexible approach, the expected net present value cost is $1.0 billion, just $0.1 billion more than if the correct distribution is chosen.
However, in the latter case -- where middle-of-the-road projections are used and reality overperforms -- the expected net present value cost is $15 billion, $12 billion more than with the correct (high-end) distribution if a static optimal approach is taken. With a flexible approach, the expected net present value cost is $3.9 billion, $0.9 billion more than if the high-end distribution had been used.
Costs associated with mis-estimating sea-level rise (excess expected NPV, $ billion):
- Overestimated sea-level distribution (use SSP5-8.5 LC, get SSP2-4.5 MC): static plan $1.0; dynamic plan $0.1
- Underestimated sea-level distribution (use SSP2-4.5 MC, get SSP5-8.5 LC): static plan $11.6; dynamic plan $2.4

Thus, with a dynamic approach, using high-end projections that capture low-confidence processes makes a lot of economic sense. Such an approach cuts off the tail risk at relatively small additional cost. (In fact, the cost of a static optimal approach using the correct distribution in a middle-of-the-road world is more than the cost of using a dynamic approach with the overestimated, high-end distribution.)
However, with a static approach, the costs of getting the distribution wrong are more substantial, and an order of magnitude larger if the distribution is underestimated than if it is overestimated.
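The regret arithmetic above can be sketched in a few lines of code. The baseline "planned under the correct distribution" figures are back-calculated from the differences quoted from Feng et al., so the exact values should be read as illustrative, not authoritative:

```python
# Expected NPV costs ($ billion) for the idealized Manhattan protection
# schemes, as quoted from Feng et al. above. The "correct distribution"
# baselines are back-calculated from the stated differences and are
# therefore approximate.
costs = {
    # (planning basis, realized distribution): {plan type: expected NPV cost}
    ("SSP5-8.5 LC", "SSP2-4.5 MC"): {"static": 2.3,  "dynamic": 1.0},
    ("SSP2-4.5 MC", "SSP2-4.5 MC"): {"static": 1.3,  "dynamic": 0.9},
    ("SSP2-4.5 MC", "SSP5-8.5 LC"): {"static": 15.0, "dynamic": 3.9},
    ("SSP5-8.5 LC", "SSP5-8.5 LC"): {"static": 3.4,  "dynamic": 3.0},
}

def regret(planning, reality, plan_type):
    """Excess expected NPV cost of planning under the wrong distribution,
    relative to having planned under the distribution that materializes."""
    return costs[(planning, reality)][plan_type] - costs[(reality, reality)][plan_type]

# Dynamic plans shrink the penalty for mis-estimating sea-level rise:
for planning, reality in [("SSP5-8.5 LC", "SSP2-4.5 MC"),
                          ("SSP2-4.5 MC", "SSP5-8.5 LC")]:
    print(planning, "->", reality,
          {p: round(regret(planning, reality, p), 1) for p in ("static", "dynamic")})
```

The asymmetry is immediately visible: under a static plan, underestimating sea-level rise costs roughly ten times more than overestimating it, while a dynamic plan keeps both regrets small.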
In truth, I think the concern the authors address is not one with scientists offering practitioners low-confidence, high-end projections as part of the domain of plausible futures. It is with how these projections are then used.
As the Feng et al. analysis, and others, indicate, the most economical approach given substantial uncertainty and ambiguity is most often the dynamic one. Where a static approach must be used, whether because a dynamic approach is infeasible or because of regulatory inflexibility, benefit-cost theory tells us what needs to be taken into account in order to determine the best option. This includes:
1) The benefit in terms of reduced risk associated with choosing different adaptation levels
2) The cost, in terms of additional adaptive expenditures, of choosing different adaptation levels
3) The discount rate used to trade off present adaptation costs against future harms
4) The risk aversion that determines how much weight is given to the high-end of the cost distribution
5) The ambiguity aversion that determines how much weight is given to different alternative probability distributions for sea level, and thus for cost.
Where the costs and benefits of adaptation are comparable, discomfort will arise if regulatory guidance specifies a single adaptation target stripped of context, because a user's risk and ambiguity aversion applies to both the costs and benefits of adaptation, not just the benefits.
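As a concrete, entirely hypothetical illustration of how the five ingredients listed above combine, consider the following minimal sketch. None of the numbers or functional forms come from the comment or the cited literature; they are placeholders chosen only to show the structure of a benefit-cost calculation under discounting, risk aversion, and ambiguity:

```python
# Hypothetical benefit-cost sketch for choosing an adaptation level
# (e.g., a seawall height increment, in meters). All values invented.

DISCOUNT_RATE = 0.03          # (3) trades off present costs vs. future harms
RISK_AVERSION = 2.0           # (4) convexity exponent; >1 up-weights bad outcomes
AMBIGUITY_WEIGHTS = {         # (5) weights on alternative SLR distributions
    "medium-confidence": 0.7,
    "low-confidence": 0.3,
}

# (1) Hypothetical expected flood damages in year 50 ($ billion) for each
# adaptation level, under each candidate sea-level distribution.
DAMAGES = {
    "medium-confidence": {0.5: 4.0,  1.0: 1.5, 2.0: 0.5},
    "low-confidence":    {0.5: 12.0, 1.0: 6.0, 2.0: 1.0},
}
# (2) Up-front adaptation cost ($ billion) for each adaptation level.
ADAPTATION_COST = {0.5: 0.5, 1.0: 1.2, 2.0: 3.5}

def disutility(loss, eta=RISK_AVERSION):
    # (4) convex disutility: large losses hurt more than proportionally
    return loss ** eta

def score(level):
    # (3) discount year-50 damages back to the present
    pv = 1.0 / (1.0 + DISCOUNT_RATE) ** 50
    # (5) ambiguity-weighted, (4) risk-adjusted expected damages
    expected = sum(w * disutility(DAMAGES[dist][level] * pv)
                   for dist, w in AMBIGUITY_WEIGHTS.items())
    # (1)+(2): total "badness" = adaptation cost plus residual risk
    return ADAPTATION_COST[level] + expected

best = min(ADAPTATION_COST, key=score)
```

With these particular placeholder values, the intermediate level wins: the lowest level leaves too much ambiguity-weighted tail risk, while the highest level's adaptation cost outweighs its marginal risk reduction. The point of the sketch is only that the preferred option depends jointly on all five parameters, not on the sea-level distribution alone.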
I suspect that the authors' concern with the actionability of projections incorporating low-confidence processes is misaimed. Given appropriately flexible decision frameworks, as Feng et al. show, we are better off incorporating such high-end projections.
Regulations that rigidly prescribe the use of specific high-end projections in static contexts, however, run the risk of leading to sub-optimal outcomes. It may be appropriate for policy to set discount, risk aversion, and ambiguity aversion levels for specific contexts; this is a matter where different political philosophies will lead to different judgements. However, given these parameters, identifying the benefit-cost optimal outcome requires considering the net value of adaptation benefits and adaptation costs under these parameters. If costs and benefits are comparable, overly rigid targets might cut off the long tail of sea-level harms but create a long tail of adaptation cost overruns.
In short, the authors have chosen the wrong target. Scientists should strive to communicate not just projections that incorporate processes for which there is a high degree of evidence, but also processes that are of potentially great significance but less agreement and evidence -- as AR6 has done. It is, however, important that actions be guided by decision frameworks that correctly reflect the nature of the information provided.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC1 -
CC2: 'Comment on egusphere-2024-534', Chris P. Weaver, 18 Mar 2024
I appreciate the opportunity to comment, since I would like to point out, and help correct, an error in the paper.
Specifically, on page 6, the authors have made the following statement about the influence of DeConto and Pollard (2016) on the U.S. interagency sea level rise scenarios report of Sweet et al. (2017) (my emphasis):
“To be consistent with “recent updates to the peer-reviewed scientific literature”, they issued an “Extreme” global mean sea-level projection of 2.5 m by 2100 for RCP8.5, exceeding the previous upper bound of 2.0 m based on Pfeffer et al. (2008). Their Extreme projection relied on a large AIS contribution, based primarily on DP16. [Footnote 4]
[Footnote 4] The 2.0 m upper bound of Pfeffer et al. (2008) assumed large contributions from both ice sheets: 0.54 m from the GrIS and 0.62 m from the AIS. In AR5, Church et al. (2013) estimated a likely upper bound of just 0.21 m for the GrIS, since process models do not support “the order of magnitude increase in flow” in Pfeffer et al. (2008). To reach an upper bound of 2.0 m or more, S17 therefore needed an increased AIS contribution of ∼1.0 m or more. To support this increase, they cited DP16 along with the expert-judgment assessment of Bamber and Aspinall (2013). However, the latter study gave a high-end (95th percentile) estimate of 0.84 m SLR from the two ice sheets, less than Pfeffer et al. (2008). Other studies cited by S17 did not give independent evidence of a large AIS contribution. Thus, both the “High” projection of 2.0 m and the “Extreme” projection of 2.5 m in S17 relied on DP16’s claim that the AIS could contribute at least a meter of SLR by 2100."
The highlighted statement is a misstatement of fact. The accompanying footnote is also erroneous, as is the part of the statement on page 7 that references DeConto and Pollard (2016) in the context of Sweet et al. (2017), i.e., “(2) S17 and other reports that were published in 2016–2020 and relied on DP16 for the AIS contribution.”
Here is the relevant paragraph from page 14 in Sweet et al. (2017):
“The growing evidence of accelerated ice loss from Antarctica and Greenland only strengthens an argument for considering worst-case scenarios in coastal risk management. Miller et al. (2013) and Kopp et al. (2014) discuss several lines of arguments that support a plausible worst-case GMSL rise scenario in the range of 2.0 m to 2.7 m by 2100: (1) The Pfeffer et al. (2008) worst-case scenario assumes a 30-cm GMSL contribution from thermal expansion. However, Sriver et al. (2012) find a physically plausible upper bound from thermal expansion exceeding 50 cm (an additional ~20-cm increase). (2) The ~60 cm maximum contribution by 2100 from Antarctica in Pfeffer et al. (2008) could be exceeded by ~30 cm, assuming the 95th percentile for Antarctic melt rate (~22 mm/year) of the Bamber and Aspinall (2013) expert elicitation study is achieved by 2100 through a linear growth in melt rate. (3) The Pfeffer et al. (2008) study did not include the possibility of a net decrease in land-water storage due to groundwater withdrawal; Church et al. (2013) find a likely land-water storage contribution to 21st century GMSL rise of -1cm to +11 cm. Thus, to ensure consistency with the growing number of studies supporting upper GMSL bounds exceeding Pfeffer et al. (2008)’s estimate of 2.0 m by 2100 (Sriver et al., 2012; Bamber and Aspinall, 2013; Miller et al., 2013; Rohling et al., 2013; Jevrejeva et al., 2014; Grinsted et al., 2015; Jackson and Jevrejeva, 2016; Kopp et al., 2014) and the potential for continued acceleration of mass loss and associated additional rise contributions now being modeled for Antarctica (e.g., DeConto and Pollard, 2016), this report recommends a revised worst-case (Extreme) GMSL rise scenario of 2.5 m by 2100.”
In developing the report, we wished to provide a number of scenarios that could fully bracket the evidence base for physically possible 21st-century sea level rise, as well as provide expert judgement about the central tendency/best-guess trajectory. These ranged from 0.3 m at the lowest of the lower bounds to 2.5 m at the uppermost of the upper bounds. Briefly, the motivation was to support as wide a range of decision contexts as existed at the time in coastal risk planning and management (e.g., see Hinkel et al., 2015, Nature Climate Change, and many others), including long-term adaptation pathways approaches and “stress test” type applications, both of which often use a “not-to-be-exceeded” upper-bound metric of performance.
As described in the quoted paragraph from Sweet et al. (2017), above, we arrived at the 2.5 m upper bound by synthesizing a number of lines of evidence from numerous studies, as well as the IPCC AR5, to individually interrogate the physically possible ranges of the contributing components to global-mean sea level rise. This was new evidence, and/or new synthesis of that evidence, since Pfeffer et al. (2008), the study that helped define the physically possible upper bound for a preceding U.S. interagency sea level rise scenarios report (Parris et al., 2012).
All of these studies predated the publication of DeConto and Pollard (2016); we had already decided on the 2.5 m upper bound, and completed most of the work of developing the global and regional scenarios, before that paper was published. As just one example, Kopp et al. (2014) estimated 2.45 m as the 99.9 percentile outcome for global-mean sea level rise in 2100 under RCP8.5. Once DeConto and Pollard (2016) was published, we added it to our citation list as another piece of evidence, but the conclusions of that paper had no influence on our choice of 2.5 m as the upper bounding scenario. The successor report to Sweet et al. (2017), i.e., Sweet et al. (2022), stated this clearly, as well (e.g., see page 11): “In Sweet et al. (2017), these scenarios were developed to span a range of 21st-century GMSL rise from 0.3 m to 2.5 m. Sweet et al. (2017) built these scenarios upon the probabilistic emissions scenario–driven projections of Kopp et al. (2014).”
The bottom line is that, if DeConto and Pollard (2016) had never been published, we would have written exactly the same report at the time that we wrote it.
In closing, I wanted to note that, on the initiative of one of the authors of this brief communication (DB), he and a number of others of us (including myself and Kopp, as well as DeConto) spent substantial time in productive discussions of the very points I have just summarized, and related topics, in the broader context of the nuances of using cutting-edge sea level rise science to support decision-making. These extensive discussions following the publication of Sweet et al. (2017) resulted in an AGU presentation by DB (see https://par.nsf.gov/servlets/purl/10066643), and a written summary of our engagement (see https://acwi.gov/climate_wkg/minutes/final_agu_consensus_statement_probabilisitic_projections_dec_2017.pdf), both of which reflected a useful integration of our diversity of perspectives as scientists and practitioners.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC2 -
RC1: 'Reply on CC2', Chris P. Weaver, 09 Apr 2024
[Converting my community comment into a formal review comment, at the Editor's request]
The authors should remove language stating or implying a reliance of Sweet et al. (2017) on DeConto and Pollard (2016). That would be a good first step in helping the paper be considered for publication. Note that I do not, in any way, have any objection to the authors disagreeing with the decision in Sweet et al. (2017) to use 2.5 m globally by 2100 as the top-end, bounding scenario on other grounds. Such a disagreement would simply have to be justified in terms of the totality of references and lines of evidence summarized above, absent any reliance on DeConto and Pollard, as well as the stated purpose of the use of a limiting upper-bound scenario in that report - in other words, the choice to include 2.5 m not because it is at all likely, but precisely because it is very, very unlikely.
Finally, while my main concern is helping the authors correct this particular error, I do also largely agree with the criticisms outlined in Community Comment 1 (CC1: '"Actionable" for whom, in what decision context?', Robert Kopp, 15 Mar 2024). It would be good to see the authors respond to and/or address those in their revision.
I appreciate the authors spending the time and effort to grapple with these issues in the literature. I continue to be very supportive of having these types of issues and ideas discussed, and I believe the continuation of the dialogue through this paper is valuable.
Citation: https://doi.org/10.5194/egusphere-2024-534-RC1 -
AC1: 'Comment on egusphere-2024-534; reply to Christopher Weaver', William Lipscomb, 29 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC1-supplement.pdf
CC3: 'Comment on egusphere-2024-534 Defining the rules so we know when to break them', Rajashree Datta, 11 Apr 2024
Actionable science calls for a higher standard for communication of science, including rating findings for reliability (as suggested by the authors). This is both because (a) even a great single paper typically only addresses a portion of earth system components and because (b) guidelines actually help articulate exceptions.
On the complexity of scientific findings vs. adaptation
Presumably, we would not present decision-makers with non-peer-reviewed SLR estimates and expect them to judge their merit based on the specific decision context. If we accept the current social production of science, i.e., “peer review” (also imperfect), a higher standard for “actionable science” is simply a logical extension.
Keller et al., 2021 (mentioned by Dr. Kopp) specifically discusses the importance of “Linking the Required Disciplines” in the management of climate risk. As an example: climate change introduces both direct impacts of SLR on coastlines and enhanced weather extremes inland. The impact on weather extremes is not typically the purpose of any single scientific paper strictly focused, for example, on the impacts of ice-sheet loss on SLR. To focus on the potential opportunity cost: if efforts are focused on coastlines now in response to an extreme SLR scenario, what of the potential loss of funding for adapting to extreme weather scenarios inland? This problem is more acute in regions with fewer resources than the one discussed here (New York City), which may benefit from clearer communication of the scientific consensus.
On guidelines and exceptions
Importantly, the authors do not advocate limiting novel science, but rather avoiding its misrepresentation, i.e., “It is better to discuss such a claim, including the gaps in the evidence, than to disregard it.” In another comment, Dr. Kopp suggests (in summary) that it is critical to present the long tail and leave room for a dynamic response, even where evidence is lacking, based on the extent of potential risk (and the associated benefit of more extensive adaptation). I see no meaningful contradiction between the need for a guideline presented by the authors and the presence of exceptions. In fact, the “exceptionality” here is still defined in reference to some guideline and underlying rationale, thus underlining the need for the guideline.
The IPCC acts as a first, but possibly inadequate, level of synthesis. In fact, the purview of the IPCC has expanded over time precisely to accommodate evolving needs. A continued discussion of not just the “what” of actionable science, but also the “why” (the philosophical underpinning, as this paper explores) can inform precisely when it is important to break with consensus and to focus research into novel claims. This is particularly true because, while the presumed audience for IPCC predictions and adaptation recommendations is “decision-makers”, there is a far larger population which needs to understand the philosophy governing adaptation priorities (and has less time to examine footnotes in the IPCC and specific case studies): voters.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC3 -
AC2: 'Comment on egusphere-2024-534; reply to Robert Kopp', William Lipscomb, 29 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC2-supplement.pdf
CC5: 'If the authors reject decision-making under deep uncertainty (DMDU), they must do so through critical engagement with the DMDU literature', Robert Kopp, 01 Jun 2024
Unfortunately, the authors’ response to my comment only strengthens my concerns with this manuscript.
In my mind, the core of the authors’ response is the following bold statement:
- "We disagree about the value of low-confidence processes in decision-making. We are comfortable with including low-likelihood processes, if the likelihoods are scientifically supported (i.e., the relevant processes are understood with at least medium confidence). We object, however, to including processes which are so poorly understood that it is not yet possible to make robust, quantitative projections. Giving premature credence to low-confidence processes can lead to misuse of scarce public and private resources and can damage the credibility of the climate science enterprise."
This expands upon the statement in the manuscript that “We suggest that the IPCC medium-confidence sea-level projections are actionable, but not the low-confidence projections that include deeply uncertain processes such as MICI.”
These are bold statements that appear to reject the entire field of decision making under deep uncertainty (DMDU). Scholarly practice would suggest that such a bold claim be made head on – that is, if the authors are to dismiss the entire DMDU literature, they need to cite and engage with that literature in a manner that justifies rejecting decades of scholarly work (and of practice), a relatively recent synthesis of which is provided by
- Marchau, V. A., Walker, W. E., Bloemen, P. J., & Popper, S. W. (2019). Decision making under deep uncertainty: from theory to practice. Springer Nature, 405 pp. https://doi.org/10.1007/978-3-030-05252-2
Contrary to this literature, the authors argue that the only appropriate decision-making practice regarding those elements of sea-level projection characterized by deep uncertainty (or low confidence) is to ignore them. I strongly disagree with this position, but would welcome an argument that makes it while engaging with this robust body of literature.
(The authors may also find it helpful to engage with the new manuscript: Lempert, R., Lawrence, J., Kopp, R., Haasnoot, M., Reisinger, A., Grubb, M., & Pasqualino, R. The Use of Decision Making Under Deep Uncertainty in the IPCC. Frontiers in Climate, 6, 1380054, https://doi.org/10.3389/fclim.2024.1380054).
I wonder, though, whether the authors actually believe their own claim. After all, while probabilistic emissions forecasts are now available and sometimes used, the dominant approach to projecting future emissions remains a scenario-based one, motivated by the deep uncertainty in future technological and policy development. It is certainly the case that there has been – and often remains – broad disagreement among experts regarding technological and policy development processes that would justify applying the label of “low confidence.” As Moss et al. (2010, https://doi.org/10.1038/nature08823) noted in laying out the RCP/SSP framework, "An underlying key issue [and, thus, point of low agreement and therefore low confidence] is whether probabilities can be usefully associated with scenarios or different levels of radiative forcing; for example, the probability that concentrations will stabilize above or below a specified level."
Do the authors think that the issue highlighted by Moss et al. must be answered affirmatively -- that is, that we must show that probabilities can be usefully associated with emissions scenarios -- for climate projections driven by those scenarios to be actionable? If experts cannot agree on probability distributions for policy and technological development, does that moot the actionability of any climate projections -- which is to say, any climate projections beyond about a 20-30 year time horizon -- that exhibit substantial sensitivity to these deeply uncertain processes? If so, that somewhat moots the discussion of low-confidence ice-sheet processes, which we have little reason to think might manifest at significant levels until late in the century.
I continue to believe that the authors’ actual objection is not to the use of deeply uncertain information in decision making, but the use of such information with decision frameworks that are designed for probabilistic information.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC5
-
RC2: 'Comment on egusphere-2024-534', Rebecca Priestley, 30 May 2024
General comments
I found this paper very interesting. I am familiar with DeConto and Pollard's 2016 paper, and the subsequent media coverage, but was not aware of the extent to which these projections were taken on by policymakers and practitioners. This case study aspect of the paper is very interesting and valuable (though I note the corrections advised by Chris P. Weaver in the interactive review). I also found the comments about disciplinary journals vs high impact journals (lines 54-64) particularly valuable.
This has potential to be an important paper, so my feedback is quite detailed with much of it focused on precision of language, to ensure clear and purposeful communication of the argument of the paper.
Specific comments on section 2 of the paper
I have specific comments about language use in section 2 of the paper, firstly around use of the word hypothesis (e.g., line 68: 'transform novel hypotheses into accepted knowledge', line 99: 'A scientific hypothesis is sufficiently accepted for use in decision-making when it is supported by … etc' and line 102: 'peer-reviewed hypotheses must be scrutinized by a diverse group of scientists etc'). I was surprised by the focus on the word 'hypothesis' here. Not all science starts with a hypothesis, and even when it does, this word is usually used to describe what comes at the start of a study, not the end. I would have thought that it's not the 'peer-reviewed hypothesis' that is the 'actionable' (or not) part of a research project, or the resulting paper. Rather, it's the peer-reviewed conclusions, claims, findings, or theories. Or, as the quote from Behar says, the 'data, analysis and forecasts' (line 25).
In the same section, I also suggest a review of the words ‘viewpoints’, ‘opinion’, and ‘assumptions’. Scientists do, of course, have viewpoints, opinions, and assumptions, but this paper is focused on peer-reviewed published research which (we hope!) relies on evidence and observations that lead to claims and conclusions (even if it doesn’t meet the criteria for actionable research). At the moment the paper could imply that scientists make claims in their published research, or IPCC authors make decisions, based only on opinions and assumptions (which could feed into politically motivated narratives seeking to undermine climate science).
I realise that different disciplines have different norms about language use, but with an interdisciplinary paper like this it’s important that the meaning of this language is accessible to a broad readership. I suggest therefore that language use is reviewed, especially around the words I’ve mentioned here.
Specific comments on section 3 of the paper
The first paragraph of section 3 is important, but is not communicating as clearly as it could. In lines 108-110 I suggest removing reference to ‘land ice’. At the moment, the AIS is listed as an example of ‘land ice’ in one sentence, then the next sentence says it ‘contains marine-grounded ice’. To avoid confusion, but not take away any meaning, the reference to ‘land ice’ could be removed and the more standard separation of SLR contributors into thermal expansion, mountain glaciers, the Greenland Ice Sheet and AIS used (as has been done in line 180). Then, in line 110, which says ‘if melted, this ice could raise sea level by several meters’, it needs to be explicit what ice is being referred to here.
In line 140 it would be useful to provide the figures for the lowered 21st-century SLR contribution in DeConto (2021), to allow comparison with the DP16 figures.
Line 71: states that IPCC assessments ‘are directed mainly to policymakers but are read by practitioners’ – I suggest that the difference between policymakers and practitioners is teased out in this paper, and more emphasis given to the role of policymakers. The publications referred to in section 3 seem to be interpretations by policymakers, that were then actioned by practitioners. In other parts of the paper, though, the emphasis on practitioners suggests that they are actioning science without this layer of interpretation by policymakers. For example in line 213 is it primarily practitioners or policymakers who need to ‘view novel peer-reviewed claims with caution’? In line 225, is it ‘scientists and practitioners’ who need to work more closely together, or scientists and policymakers?
Technical corrections and points of clarification
Line 27: says the term ‘actionable science’ (which I was not familiar with) has been ‘widely adopted’ but there’s only one citation here. More citations here would strengthen this claim.
Line 30: ‘Our goal is to offer guidance …’ who to? Is this guidance for scientists, practitioners, or both?
Line 38: As a science historian I have to note that the discipline is decades on from the ‘lone genius’ approach, as is much popular science history. This is perhaps a traditional approach, or a twentieth century approach, but I’m not sure it’s right to say ‘often’ when referring to current work.
Lines 66, 67: Mentions first ‘press releases’ and then ‘media accounts’. It would be good to explicitly make the connection between the press releases and the media accounts – while the press releases might cast the work in dramatic light, the media stories often go further, and the headlines (which are not written by the journalists) even further than that, with attention seeking headlines. Do you have a citation for the statement that ‘practitioners typically learn of scientific advances through media coverage’? (line 65)
Line 90: citation and page number needed for this quote
Line 95: Makes an important point, but is it also worth noting that opting for ‘higher ground’ is not necessarily guarding against ‘unknown risks’, it could alternatively (or also) be seen as choosing an option with a longer lifespan, given that sea level rise will continue beyond 2100.
Line 186: what does ‘community’ mean in this context? The scientific community?
Line 213: Is ‘contradict’ the right word here, or would ‘challenge’ be more appropriate?
I look forward to your response. As I said at the start, this is a very interesting paper.
Citation: https://doi.org/10.5194/egusphere-2024-534-RC2 -
AC3: 'Reply to RC2 and RC3', William Lipscomb, 16 Sep 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC3-supplement.pdf
-
RC3: 'Comment on egusphere-2024-534', Anonymous Referee #3, 30 May 2024
In this manuscript, Lipscomb et al. discuss the challenges of providing ‘actionable’ scientific research in the context of climate adaptation. In the manuscript, the authors emphasize the importance of distinguishing between novel hypotheses/claims and actionable science that can be used for decision-making. The authors discuss this (also) within the context of a recent high-impact study projecting rapid sea level rise from the Antarctic ice sheet due to a low-confidence process. Overall, Lipscomb et al. propose (1) an epistemic criterion for determining when scientific claims are actionable, based on multiple lines of high-quality evidence and evaluated by a diverse group of experts, (2) recommendations for scientists and practitioners to improve the use of actionable science in decision-making.
This manuscript has clearly attracted lots of interest and sparked productive discussions (see comment section and follow-up AGU presentation highlighted by Chris Weaver, CC2). The authors replied already quite extensively to the main comments raised by Robert Kopp (CC1) and Chris Weaver (CC2). If the authors are willing to include their reply to CC1 and CC2 in the revised version of the manuscript (in particular, the misstatement about the relevance of DP16 on the Sweet et al. (2017) report), I will consider it ready for publication as is; the manuscript is in general very well written and clearly an excellent fit for The Cryosphere (Brief Communication). I do have a couple of minor comments to add, which the authors can see as a suggestion for the revised manuscript. I do not consider these (minor) comments as strictly required for publication - I rather hope they can contribute to the discussion.
1) In general, I agree with the authors on how the epistemic criterion for actionable science is formulated, and with the recommendations to scientists, journalists, and practitioners laid out in Section 4. Both the criterion and the recommendations largely rely on IPCC reports or meta analyses/community assessment. While this makes sense to me, I’m skeptical about how the criterion and recommendations could be applied in practice, as IPCC reports (or other community assessments) are published at much longer timescales than individual studies, and media are typically very quick to pick up high-impact claims (often at the same time a new study comes out for high-impact journals). Even assuming improved awareness and communication between scientists, journalists, and practitioners in the future (which is one recommendation made by the authors and is certainly something we should aspire to), it looks to me that for a case similar to the one presented by the authors to not happen again much would be left to individual choices (for instance: being cautious when making/dealing with new claims). I am fine if the main goal of the authors is to start a discussion on the topics presented, rather than proposing some examples of practical solutions to implement their recommendations. However, I think it would be of great help to see some (more) critical reflection on the latter. For instance: should it become part of the peer-review process to have reviewers providing some level of confidence and/or rating how much a study can be considered reliable or even suitable for media coverage (using for example a formal rating system similar to the one used to evaluate originality, quality, etc.)?
2) Line 65: 'practitioners typically learn of scientific advances through media coverage'. I think this is quite an important point of focus in the manuscript - the link between scientific results, media coverage, and practitioners. I think, however, that this sentence is a bit too vague, and it would be good to have reference(s) backing it up. If there aren't any, maybe it would make sense to use the DP16 study as the example, and avoid generalizing ('typically learn').
Citation: https://doi.org/10.5194/egusphere-2024-534-RC3 -
AC3: 'Reply to RC2 and RC3', William Lipscomb, 16 Sep 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC3-supplement.pdf
-
CC4: 'Literature on science usability and decision making context', Jeremy Bassis, 01 Jun 2024
This short paper describes a framework for actionable science. I am writing this comment as a glaciologist who also works broadly in the field of adaptation and science usability. I would really like to encourage the authors and editors to expand the paper to full length to do justice to the ideas, because in the current short format there is much ambiguity, with significant room for mischief and harm.
To start, the study of how knowledge systems are applied to decision making often goes under the name "science usability", but I have also seen it referred to as "actionable science". There is substantial literature about the process of knowledge creation for planning and decision making. Foundational to this field is the concept that for knowledge to be applied in a decision-making or planning context, it requires credibility, salience, and legitimacy (see, e.g., Cash et al., 2003). To quote directly from Cash et al. (2003): "... credibility involves the scientific adequacy of the technical evidence and arguments. Salience deals with the relevance of the assessment to the needs of decision makers. Legitimacy reflects the perception that the production of information and technology has been respectful of stakeholders' divergent values and beliefs, unbiased in its conduct, and fair in its treatment of opposing views and interests."
The definition of "actionable" that the authors present only includes what would traditionally be called "credibility", omitting salience and legitimacy completely. This is a fatal omission from a usability standpoint, because there is abundant, credible literature demonstrating the importance of salience and legitimacy (e.g., Cash et al., 2003). To give two examples of salience: communities adapt to local sea level, not global sea level, so global sea-level rise might have low salience and thus low usability for many decision makers; and century-and-longer sea-level rise projections have little salience for, say, 30-year home mortgages. Legitimacy is also important, as it shapes the process and values of science creation, and there are numerous examples where low legitimacy jeopardized science usability.
Clearly usability depends on the specific decisions and the community making them and, as noted by another commenter, usability cannot be divorced from this context: actionable in one decision context does not mean actionable in a different decision context. We also know that the way to increase the usability of knowledge is through co-generation, but that this is inefficient. Usually it is knowledge brokers and boundary organizations that play the role of interpreters and translators of knowledge to users. There is, again, abundant literature in this field, and it is a shame to have no dialogue with the many previous studies that identify both the criteria AND the process of usable knowledge production.
My view is that the present study has *much* more value if this study is reframed more specifically around credibility rather than more broadly about usability/actionable. I would even recommend switching out the term "actionable" for "credible" to better reflect existing terminology. Even here, I would have really like the authors to examine the role of values and tradeoffs in defining credibility, especially recognizing that scientific uncertainty can often be weaponized to delay action.
Determining whether a study has multiple lines of evidence to support it is, as we can see from the comments, subjective, and the authors have not demonstrated that their definition improves decision making. As the current paper reads, it seems specifically designed as a refutation of DeConto and Pollard's (2016) projections and the impact they have had as a high-end scenario in planning and adaptation decisions, rather than as a general framework. If that is the goal, then the study should perhaps be framed as such.
To give some additional examples: the explosive disintegration of the Larsen B ice shelf and the subsequent acceleration of its tributary glaciers was not anticipated by any models, nor was the sudden acceleration and retreat of Jakobshavn, Pine Island, or Thwaites Glacier. The disintegration of Conger ice shelf was also a surprise, although perhaps it shouldn't have been. Going beyond glaciology, the pre-eminent physicist of the time, Lord Kelvin, famously said that "Heavier than air flying machines are impossible" a mere 8 years before the first airplane successfully flew (Schoemaker, 1995). The Harvard Economic Society announced that "A severe depression like that of 1920-1921 is outside the range of probability" on November 16, 1929 (Schoemaker, 1995). I could go on with other examples in which real-world events failed to adhere to the academic consensus. The point here isn't that scientists can be wrong (of course we are!), but that how communities, stakeholders, and decision makers choose to incorporate information depends heavily on values, resources, and objectives. Would we be better off stress testing, strategizing, or planning to deal with low-probability, high-impact events? I don't know, and I would be loath for scientists to insist on being the arbiter of these value-laden decisions. Actionable doesn't always mean infrastructure; it might mean longer-term strategic thinking to be better positioned to incorporate new information as it becomes available. Perhaps one useful framing for the authors is that their approach might be most suited to mega-infrastructure investments that are expensive and take decades to plan and build. Different criteria would be appropriate for different scales of intervention.
My penultimate point illustrates the degree of mischief that is possible if terminology and definitions aren't tightened up and illustrated with multiple case studies. The authors argue that DeConto and Pollard's study of sea-level rise isn't credible (actionable in their words) because there aren't multiple lines of evidence to support ice cliff instability. I suspect that the paleo record contains more ambiguity than the authors concede here, but nonetheless this is a fair point. However, we can also say that there is zero evidence that the calving front will remain stationary and very little evidence to support *any* calving law used in ice sheet modeling. By the authors' same logic, we might conclude that *none* of the projections of sea-level rise are credible (actionable in the authors' words). We can apply this same reasoning to other processes. Should we doubt the entire enterprise of climate modeling because of the quasi-empirical treatment of clouds, precipitation, and aerosols? Clearly, this is not what the authors intend, but it might be an unintentional side effect if not clarified. To this end, the IPCC, NOAA, the NASA sea level team, and multiple other organizations are involved in summarizing the literature and play a well-documented role in determining "credibility"; these organizations form one step in a chain of boundary organizations. There needs to be dialogue with the role played by these existing structures and their interplay with other boundary organizations.
My final point relates to the definition of "action". As noted by Knaggård (2014), one of the common actions that stakeholders take is to decide that the scientific uncertainty is currently too large and to devote resources to research to better quantify and hopefully reduce uncertainties. By this definition, the action associated with high-impact/low-probability events might be more research (potentially coupled with adaptive decision making). The most common action that decision makers take, however, is simply to do what is currently politically and technically feasible (Knaggård, 2014). These decisions, based on political expediency and co-benefits, depend more on the solution space than on scientific uncertainty, and so establishing credibility is less important than salience and legitimacy.
I'm very sympathetic to the authors' goals, but I'm not sure that, to apply their own standard, their definition of "actionable" has multiple, independent lines of evidence to support its use and adoption. I think it would be hard to demonstrate this in the limited space available, but it would be a great addition if the authors built their case through a larger number of case studies and a literature review, applying their own criterion of multiple lines of evidence to their proposed definition of actionable science.
Refs
Cash, David W., et al. "Knowledge systems for sustainable development." Proceedings of the National Academy of Sciences 100.14 (2003): 8086-8091. (And many more on usability!)
Schoemaker, Paul J. H. "Scenario planning: a tool for strategic thinking." MIT Sloan Management Review (1995).
Åsa Knaggård (2014) What do policy-makers do with scientific uncertainty? The incremental character of Swedish climate change policy-making, Policy Studies, 35:1, 22-39, DOI: 10.1080/01442872.2013.804175
All the best,
Jeremy Bassis
Citation: https://doi.org/10.5194/egusphere-2024-534-CC4 -
CC6: 'Comment on egusphere-2024-534', Judy Lawrence, 06 Jun 2024
Comment on Lipscomb, Behar, Morrison Brief Communication
By Judy Lawrence, Marjolijn Haasnoot, Robert Lempert
We are researchers and science brokers who work with SLR science in the context of decision making under deep uncertainty (DMDU). We recognize the challenge of dealing with multiple scenarios that are frequently updated as new understanding emerges. However, the proposal set out by LBM runs the risk that we wait until there is certainty before taking adaptation action. This would not meet the precautionary test in the UNFCCC, which underpins the Paris Agreement, and would lead to adaptation decision delay.
There are approaches, which have emerged and been assessed in IPCC AR6, for addressing the dilemma that decision makers find themselves in as new science continues to emerge and decisions have to be made under uncertainty over the trajectory and pace of change. These DMDU approaches and methods are able to deal with large uncertainties and updated (climate) information. The LBM paper mentions adaptive planning and integrating SLR information into planning but does not elaborate on what this means or on how these approaches can address the problem of over- and under-investment in adaptation. This omission leads to a very directive solution that would not be decision-relevant (neither salient nor legitimate).
Furthermore, we agree with Bassis's comment that the LBM paper only focuses on the matter of credibility and that salience and legitimacy are critical for decision making under uncertainty to enable local context to inform decisions. The response from LBM that the paper is not about how the science is used, misses the point that science does not sit in a vacuum outside how it may be used. In fact the authors have defined their problem with an example of how the projections were ‘misused’ in the San Francisco example. Science and its use are inextricably linked.
One cannot set a fixed criterion/standard for a changing context (both the science and the societal values which decision makers weigh up). The question of the standard of the science cannot be divorced from a discussion about how sea-level rise projections are used. Using DMDU methods such as Dynamic Adaptive Policy Pathways (DAPP; Haasnoot et al. 2019) or Robust Decision Making (Lempert et al. 2019) to stress test adaptation options against a range of scenarios gives decision makers an idea of how sensitive each strategy is; this information can then be weighed against the other considerations that decision makers must take into account as representatives of their communities, now and in the future. Given that surprises around polar ice processes and feedbacks are occurring, even high-end/low-confidence scenarios are useful to remind decision makers that one cannot rule out such outcomes, as also stated in IPCC AR6. A DAPP approach enables adaptation to be broken down into near-term actions and mid- to long-term options, to avoid lock-in of investments that are costly to adjust in the future and that shift adaptation costs to future generations. It helps to prepare and keep options open, or to create them through innovation and planning, allowing further adaptation if necessary. DAPP thus enables more robust decisions to be made and contingency actions to be ready.
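The stress-testing step described here can be illustrated with a toy calculation. This is a minimal sketch only: the option names, design heights, scenario values, and cost model below are all invented for illustration and are not drawn from any of the cited studies.

```python
# Toy stress test of adaptation options against a range of sea-level-rise
# scenarios, in the spirit of DAPP/Robust Decision Making: score every option
# in every scenario, then compare worst-case regret. All numbers are invented.

SCENARIOS = {"low": 0.4, "medium": 0.8, "high_end": 1.9}     # SLR by 2100 (m)
OPTIONS = {"modest": 0.5, "mid": 1.0, "precautionary": 2.0}  # design heights (m)

def total_cost(design_m, slr_m):
    """Invented cost model: build cost scales with design height;
    damages accrue when sea level exceeds the design height."""
    return 1.0 * design_m + 10.0 * max(0.0, slr_m - design_m)

# Performance of every option in every scenario
costs = {opt: {s: total_cost(h, slr) for s, slr in SCENARIOS.items()}
         for opt, h in OPTIONS.items()}

# Regret = an option's cost minus the best achievable cost in that scenario
best = {s: min(costs[o][s] for o in OPTIONS) for s in SCENARIOS}
max_regret = {o: max(costs[o][s] - best[s] for s in SCENARIOS) for o in OPTIONS}

for opt, r in sorted(max_regret.items(), key=lambda kv: kv[1]):
    print(f"{opt:>13}: worst-case regret = {r:.1f}")
```

In this toy setup the precautionary option minimizes worst-case regret precisely because the high-end scenario is included in the stress test; drop that scenario and the ranking changes. Surfacing that sensitivity is the kind of information the stress-testing step is meant to provide.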
Given the time it takes for long-lived infrastructure projects to be designed and implemented, especially for coastal adaptation to high and rapid sea-level rise, a precautionary approach has merit. Not considering such scenarios runs the risk of acting too late or investing in the wrong measures, resulting in high sunk costs and transfer costs. Considering them, on the other hand, gives decision makers greater confidence in a changing situation (in both the science and societal values).
The nature of the decision process is lightly addressed in the paper beyond one example. A range of approaches as to how SLR projections can be used should be proffered. These can be found in the literature.
For example, in the Netherlands a group of experts (the Signal Group) advises the government on relevant new research, which is then assessed for its potential implications and the need for further research or actions (Haasnoot et al. 2018). The follow-up research from Deltares and KNMI led to a further assessment of the need to revisit the adaptation strategy. It also raised awareness that long-term higher sea levels, even under low global-warming scenarios, would mean that the current adaptive plan would not be sufficient and that transformative adaptation would be needed. While near-term actions were not changed immediately, it was recognised that preparations were needed to be able to adapt further if such accelerated SLR became a reality.
In New Zealand, revised national coastal hazards and climate change guidance (Ministry for the Environment 2024) has adopted a tailored approach for different types of decisions, using DAPP to stress test adaptation options with downscaled global scenarios that enable polar ice responses and vertical land movement, which vary around the coast, to be incorporated into SLR. Periodic revisions are undertaken to reflect new science, and a precautionary approach is adopted, considering at least a 100-year timeframe to account for change and uncertainty. Specific infrastructure guidance is also available (Lawrence and Allison, 2024).
Adaptive pathways planning and monitoring for signals is also done in practice in the Thames Estuary plan (Environment Agency, 2012; Ranger et al. 2013) and in New York (Blake et al., 2019; Rozenzweig). Regular ongoing assessment and evaluation of changes is also planned (e.g. ongoing, and every six years, in New York and in the Netherlands).
References
Environment Agency. (2012). TE2100 Plan. Managing Flood Risk Through London and the Thames Estuary.
Haasnoot, M., van 't Klooster, S., & van Alphen, J. (2018). Designing a monitoring system to detect signals to adapt to uncertain climate change. Global Environmental Change, 52, 273–285. https://doi.org/10.1016/j.gloenvcha.2018.08.003
Haasnoot, M., Warren, A., & Kwakkel, J. H. (2019). Dynamic Adaptive Policy Pathways (DAPP). In V. A. W. J. Marchau, W. E. Walker, P. J. T. M. Bloemen, & S. W. Popper (Eds.), Decision Making under Deep Uncertainty: From Theory to Practice (pp. 71–92). Springer International Publishing. https://doi.org/10.1007/978-3-030-05252-2_4
Lawrence, J., & Allison, A. (2024). Guidance on adaptive infrastructure decision-making for addressing compound climate change impacts. https://deepsouthchallenge.co.nz/wp-content/uploads/2024/04/Guidance-on-adaptive-decision-making-for-addressing-compound-climate-change-impacts-for-infrastructure.pdf
Lempert, R. (2013). Scenarios that illuminate vulnerabilities and robust responses. Climatic Change, 117(4), 627–646. https://doi.org/10.1007/s10584-012-0574-6
Ministry for the Environment. 2024. Coastal hazards and climate change guidance. Wellington: Ministry for the Environment. Pub. 1805 https://environment.govt.nz/assets/publications/Coastal-hazards-and-climate-change-guidance-2024-ME-1805.pdf
Ranger, N., Reeder, T., & Lowe, J. (2013). Addressing `deep’ uncertainty over long-term climate in major infrastructure projects: four innovations of the Thames Estuary 2100 Project. EURO Journal on Decision Processes, 1(3), 233–262. https://doi.org/10.1007/s40070-013-0014-5
Citation: https://doi.org/10.5194/egusphere-2024-534-CC6 -
AC4: 'Comment on egusphere-2024-534', William Lipscomb, 24 Oct 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC4-supplement.pdf
-
CC1: '"Actionable" for whom, in what decision context?', Robert Kopp, 15 Mar 2024
I read this brief comment with interest, and found some core issues troubling.
Fundamentally, the authors discuss 'actionable' science, but they discuss it stripped of context. Actions are defined by the American Heritage Dictionary as 'organized activity to accomplish an objective'. Science cannot be judged to be actionable, or not, outside the context of an organized activity and an objective. It makes little sense to talk about something being 'actionable' in general, outside of a specific decision context.
The authors neglect the extensive literature on decision science and risk analysis relevant to using sea-level projections in adaptation decision making. For a relatively recent review, see Keller, K., Helgeson, C., & Srikrishnan, V. (2021). Climate risk management. Annual Review of Earth and Planetary Sciences, 49, 95-116, https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-080320-055847.
In the specific context of communicating sea-level uncertainty and ambiguity, the authors should also see Kopp, R. E., Oppenheimer, M., O’Reilly, J. L., Drijfhout, S. S., Edwards, T. L., Fox-Kemper, B., ... & Xiao, C. (2023). Communicating future sea-level rise uncertainty and ambiguity to assessment users. Nature climate change, 13(7), 648-660, https://www.nature.com/articles/s41558-023-01691-8. Given the direct relevance, this latter omission is particularly surprising.
Why do the organized activity and the objective matter?
Broadly, high-end sea-level rise scenarios, including low-confidence processes, are valuable in flexible, adaptive decision-making. This is shown by a number of papers, but perhaps most clearly and directly for this context in a preprint by Feng et al. (https://doi.org/10.22541/essoar.170914510.03388005/v1 ).
Among other analyses, Feng et al. compare idealized protection schemes for Manhattan under (1) a static optimal approach, where a single sea wall elevation must be picked based on available knowledge today, and (2) a variety of dynamic approaches, where sea wall height can be periodically adjusted based on new information. (I focus particularly on the 'reinforcement learning' approach described therein).
They consider two cases where projects are planned under inaccurate sea-level rise projections: (A) where planning takes place under the SSP5-8.5 low-confidence projections but reality corresponds to the SSP2-4.5 medium-confidence projections, and (B) where planning takes place under the SSP2-4.5 medium-confidence projections but reality corresponds to the SSP5-8.5 low-confidence projections.
In the former case -- where high-end projections are used and reality underperforms -- the expected net present value cost is $2.3 billion, $1.0 billion more than with the correct (lower) distribution, if a static approach is taken. With a flexible approach, the expected net present value cost is $1.0 billion, just $0.1 billion more than if the correct distribution is chosen.
However, in the latter case -- where middle-of-the-road projections are used and reality overperforms -- the expected net present value cost is $15 billion, $12 billion more than with the correct (high-end) distribution if a static optimal approach is taken. With a flexible approach, the expected net present value cost is $3.9 billion, $0.9 billion more than if the high-end distribution had been used.
Costs associated with mis-estimating sea-level rise (expected NPV, $ billion):

                                                                Static plan   Dynamic plan
Overestimated distribution (use SSP5-8.5 LC, get SSP2-4.5 MC)       $1.0          $0.1
Underestimated distribution (use SSP2-4.5 MC, get SSP5-8.5 LC)     $11.6          $2.4

Thus, with a dynamic approach, using high-end projections that capture low-confidence processes makes a lot of economic sense. Such an approach cuts off the tail risk at relatively small additional cost. (In fact, the cost of a static optimal approach using the correct distribution in a middle-of-the-road world is more than the cost of using a dynamic approach with the overestimated, high-end distribution.)
However, with a static approach, the costs of getting the distribution wrong are more substantial (and an order of magnitude larger if the distribution is underestimated than if it is overestimated).
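The static-versus-dynamic contrast above can be illustrated with a toy Monte Carlo sketch. This is my own minimal construction, not the Feng et al. model: all cost parameters, distributions, and the 10% safety margin are illustrative assumptions, chosen only to show why committing to one wall height under the wrong distribution is costlier than adjusting the wall as the true trajectory is observed.

```python
import random

random.seed(0)

# Illustrative cost parameters (hypothetical, not from Feng et al.)
BUILD_COST_PER_M = 1.0      # cost units per meter of wall height
DAMAGE_PER_M_EXCESS = 20.0  # damage units per meter of overtopping

def realized_cost(wall_height, slr):
    """Construction cost plus damages if realized sea level overtops the wall."""
    excess = max(0.0, slr - wall_height)
    return BUILD_COST_PER_M * wall_height + DAMAGE_PER_M_EXCESS * excess

def static_plan(planning_draws):
    """Static optimal plan: pick one wall height today that minimizes
    expected cost under the (possibly wrong) planning distribution."""
    heights = [h / 10 for h in range(0, 31)]  # candidate heights, 0.0-3.0 m
    return min(heights,
               key=lambda h: sum(realized_cost(h, s) for s in planning_draws))

def dynamic_plan_cost(slr, n_steps=4):
    """Stylized dynamic plan: raise the wall in stages as the true rise is
    observed, paying incrementally for the height actually needed."""
    height, cost = 0.0, 0.0
    for step in range(1, n_steps + 1):
        observed = slr * step / n_steps  # rise observed so far
        target = observed * 1.1          # modest safety margin (assumed)
        if target > height:
            cost += BUILD_COST_PER_M * (target - height)
            height = target
    return cost + DAMAGE_PER_M_EXCESS * max(0.0, slr - height)

# Plan under a lower distribution; reality follows a higher one.
planning = [max(0.0, random.gauss(0.7, 0.2)) for _ in range(5000)]
truth = [max(0.0, random.gauss(1.4, 0.4)) for _ in range(5000)]

h_static = static_plan(planning)
static_cost = sum(realized_cost(h_static, s) for s in truth) / len(truth)
dynamic_cost = sum(dynamic_plan_cost(s) for s in truth) / len(truth)
print(f"static wall {h_static:.1f} m, expected cost {static_cost:.2f}")
print(f"dynamic plan expected cost {dynamic_cost:.2f}")
```

Under these assumed numbers the dynamic plan's expected cost is far below the static plan's, mirroring the qualitative asymmetry in the table: flexibility sharply limits the penalty for having planned under the wrong distribution.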
In truth, I think the concern the authors address is not with scientists offering practitioners low-confidence, high-end projections as part of the domain of plausible futures, but with how these projections are then used.
As the Feng et al. analysis, and others, indicate, the most economical approach given substantial uncertainty and ambiguity is most often the dynamic one. Where a static approach must be used, whether due to inability to undertake a dynamic approach or regulatory inflexibility, benefit-cost theory tells us what needs to be taken into account in order to determine the best option. This includes:
1) The benefit in terms of reduced risk associated with choosing different adaptation levels
2) The cost in terms of additional adaptive expenditures associated with choosing different adaptation levels
3) The discount rate used to trade off present adaptation costs and future harms
4) The risk aversion that determines how much weight is given to the high-end of the cost distribution
5) The ambiguity aversion that determines how much weight is given to different alternative probability distributions for sea level and thus for cost.
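The five factors above can be combined in a small benefit-cost sketch. This is my own illustrative construction, not a method from the comment or from Feng et al.: the discount rate, CRRA-style risk aversion exponent, alpha-maxmin ambiguity weighting, cost parameters, and both candidate sea-level distributions are all hypothetical assumptions, chosen only to show how the pieces enter the optimization.

```python
import random

random.seed(1)

# Hypothetical decision parameters (factors 3-5 in the list above)
DISCOUNT_RATE = 0.03    # trades off present adaptation cost vs. future harms
RISK_AVERSION = 2.0     # >1 up-weights the high end of the cost distribution
AMBIGUITY_WEIGHT = 0.7  # weight on the worse-scoring candidate distribution

def disutility(cost, gamma=RISK_AVERSION):
    """Convex disutility: higher gamma penalizes high-cost tails more."""
    return cost ** gamma

def expected_disutility(height, slr_draws, horizon_years=80):
    """Adaptation cost paid now; flood harms discounted from the horizon."""
    discount = 1.0 / (1.0 + DISCOUNT_RATE) ** horizon_years
    build = height  # build cost proportional to height (illustrative units)
    harms = [20.0 * max(0.0, s - height) for s in slr_draws]
    return disutility(build) + discount * sum(disutility(h) for h in harms) / len(harms)

def ambiguity_averse_score(height, dist_a, dist_b):
    """Alpha-maxmin style: weight the worse of the two distributions more."""
    scores = sorted([expected_disutility(height, dist_a),
                     expected_disutility(height, dist_b)])
    return AMBIGUITY_WEIGHT * scores[1] + (1 - AMBIGUITY_WEIGHT) * scores[0]

# Two candidate sea-level distributions (sampled), standing in for a
# medium-confidence and a low-confidence high-end projection.
medium = [max(0.0, random.gauss(0.6, 0.2)) for _ in range(4000)]
high_end = [max(0.0, random.gauss(1.3, 0.5)) for _ in range(4000)]

heights = [h / 10 for h in range(0, 31)]  # candidate heights, 0.0-3.0 m
best = min(heights, key=lambda h: ambiguity_averse_score(h, medium, high_end))
print(f"benefit-cost optimal wall height under ambiguity: {best:.1f} m")
```

The design point is that risk aversion and ambiguity aversion enter on both sides of the ledger: the same convex disutility applies to adaptation expenditures as to flood harms, which is why (as noted below) a single prescribed target stripped of these parameters can misfire.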
Where the costs and benefits of adaptation are comparable, discomfort will arise if regulatory guidance specifies a single adaptation target stripped of context, because a user's risk and ambiguity aversion applies to both the costs and benefits of adaptation, not just the benefits.
I suspect that the authors' concern with the actionability of projections incorporating low-confidence processes is misaimed. Given appropriately flexible decision frameworks, as Feng et al. show, we are better off incorporating such high-end projections.
Regulations that rigidly prescribe the use of specific high-end projections in static contexts, however, run the risk of leading to sub-optimal outcomes. It may be appropriate for policy to set discount, risk aversion, and ambiguity aversion levels for specific contexts; this is a matter where different political philosophies will lead to different judgements. However, given these parameters, identifying the benefit-cost optimal outcome requires considering the net value of adaptation benefits and adaptation costs under these parameters. If costs and benefits are comparable, overly rigid targets might cut off the long tail of sea-level harms but create a long tail of adaptation cost overruns.
In short, the authors have chosen the wrong target. Scientists should strive to communicate not just projections that incorporate processes for which there is a high degree of evidence, but also processes that are of potentially great significance but less agreement and evidence -- as AR6 has done. It is, however, important that actions be guided by decision frameworks that correctly reflect the nature of the information provided.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC1 -
CC2: 'Comment on egusphere-2024-534', Chris P. Weaver, 18 Mar 2024
I appreciate the opportunity to comment, since I would like to point out, and help correct, an error in the paper.
Specifically, on page 6, the authors have made the following statement about the influence of DeConto and Pollard (2016) on the U.S. interagency sea level rise scenarios report of Sweet et al. (2017) (my emphasis):
“To be consistent with “recent updates to the peer-reviewed scientific literature”, they issued an “Extreme” global mean sea-level projection of 2.5 m by 2100 for RCP8.5, exceeding the previous upper bound of 2.0 m based on Pfeffer et al. (2008). Their Extreme projection relied on a large AIS contribution, based primarily on DP16. [Footnote 4]
[Footnote 4] The 2.0 m upper bound of Pfeffer et al. (2008) assumed large contributions from both ice sheets: 0.54 m from the GrIS and 0.62 m from the AIS. In AR5, Church et al. (2013) estimated a likely upper bound of just 0.21 m for the GrIS, since process models do not support “the order of magnitude increase in flow” in Pfeffer et al. (2008). To reach an upper bound of 2.0 m or more, S17 therefore needed an increased AIS contribution of ∼1.0 m or more. To support this increase, they cited DP16 along with the expert-judgment assessment of Bamber and Aspinall (2013). However, the latter study gave a high-end (95th percentile) estimate of 0.84 m SLR from the two ice sheets, less than Pfeffer et al. (2008). Other studies cited by S17 did not give independent evidence of a large AIS contribution. Thus, both the “High” projection of 2.0 m and the “Extreme” projection of 2.5 m in S17 relied on DP16’s claim that the AIS could contribute at least a meter of SLR by 2100."
The highlighted statement is a misstatement of fact. The accompanying footnote is also erroneous, as is the part of the statement, on page 7, that references DeConto and Pollard (2016) in the context of Sweet et al. (2017), i.e., “(2) S17 and other reports that were published in 2016–2020 and relied on DP16 for the AIS contribution.”
Here is the relevant paragraph from page 14 in Sweet et al. (2017):
“The growing evidence of accelerated ice loss from Antarctica and Greenland only strengthens an argument for considering worst-case scenarios in coastal risk management. Miller et al. (2013) and Kopp et al. (2014) discuss several lines of arguments that support a plausible worst-case GMSL rise scenario in the range of 2.0 m to 2.7 m by 2100: (1) The Pfeffer et al. (2008) worst-case scenario assumes a 30-cm GMSL contribution from thermal expansion. However, Sriver et al. (2012) find a physically plausible upper bound from thermal expansion exceeding 50 cm (an additional ~20-cm increase). (2) The ~60 cm maximum contribution by 2100 from Antarctica in Pfeffer et al. (2008) could be exceeded by ~30 cm, assuming the 95th percentile for Antarctic melt rate (~22 mm/year) of the Bamber and Aspinall (2013) expert elicitation study is achieved by 2100 through a linear growth in melt rate. (3) The Pfeffer et al. (2008) study did not include the possibility of a net decrease in land-water storage due to groundwater withdrawal; Church et al. (2013) find a likely land-water storage contribution to 21st century GMSL rise of -1cm to +11 cm. Thus, to ensure consistency with the growing number of studies supporting upper GMSL bounds exceeding Pfeffer et al. (2008)’s estimate of 2.0 m by 2100 (Sriver et al., 2012; Bamber and Aspinall, 2013; Miller et al., 2013; Rohling et al., 2013; Jevrejeva et al., 2014; Grinsted et al., 2015; Jackson and Jevrejeva, 2016; Kopp et al., 2014) and the potential for continued acceleration of mass loss and associated additional rise contributions now being modeled for Antarctica (e.g., DeConto and Pollard, 2016), this report recommends a revised worst-case (Extreme) GMSL rise scenario of 2.5 m by 2100.”
In developing the report, we wished to provide a number of scenarios that could be used to fully bracket the evidence base for the physically possible 21st-century sea level rise, as well as to provide expert judgement about the central tendency/best-guess trajectory. These ranged from 0.3 m at the lowest of the lower bounds to 2.5 m at the uppermost of the upper bounds. Briefly, the motivation was to support as wide a range of decision contexts as existed at the time in coastal risk planning and management (e.g., see Hinkel et al., 2015, Nature Climate Change, and many others), including long-term adaptation pathways approaches and “stress test” type applications, both of which often use a “not-to-be-exceeded,” upper bound metric of performance.
As described in the quoted paragraph from Sweet et al. (2017), above, we arrived at the 2.5 m upper bound by synthesizing a number of lines of evidence from numerous studies, as well as the IPCC AR5, to individually interrogate the physically possible ranges of the contributing components to global-mean sea level rise. This was new evidence, and/or new synthesis of that evidence, since Pfeffer et al. (2008), the study that helped define the physically possible upper bound for a preceding U.S. interagency sea level rise scenarios report (Parris et al., 2012).
All of these studies predated the publication of DeConto and Pollard (2016); we had already decided on the 2.5 m upper bound, and completed most of the work of developing the global and regional scenarios, before that paper was published. As just one example, Kopp et al. (2014) estimated 2.45 m as the 99.9 percentile outcome for global-mean sea level rise in 2100 under RCP8.5. Once DeConto and Pollard (2016) was published, we added it to our citation list as another piece of evidence, but the conclusions of that paper had no influence on our choice of 2.5 m as the upper bounding scenario. The successor report to Sweet et al. (2017), i.e., Sweet et al. (2022), stated this clearly, as well (e.g., see page 11): “In Sweet et al. (2017), these scenarios were developed to span a range of 21st-century GMSL rise from 0.3 m to 2.5 m. Sweet et al. (2017) built these scenarios upon the probabilistic emissions scenario–driven projections of Kopp et al. (2014).”
The bottom line is that, if DeConto and Pollard (2016) had never been published, we would have written exactly the same report at the time that we wrote it.
In closing, I wanted to note that, on the initiative of one of the authors of this brief communication (DB), he and a number of others of us (including myself and Kopp, as well as DeConto) spent substantial time in productive discussions of the very points I have just summarized, and related topics, in the broader context of the nuances of using cutting-edge sea level rise science to support decision-making. These extensive discussions following the publication of Sweet et al. (2017) resulted in an AGU presentation by DB (see https://par.nsf.gov/servlets/purl/10066643), and a written summary of our engagement (see https://acwi.gov/climate_wkg/minutes/final_agu_consensus_statement_probabilisitic_projections_dec_2017.pdf), both of which reflected a useful integration of our diversity of perspectives as scientists and practitioners.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC2 -
RC1: 'Reply on CC2', Chris P. Weaver, 09 Apr 2024
[Converting my community comment into a formal review comment, at the Editor's request]
I appreciate the opportunity to comment, since I would like to point out, and help correct, an error in the paper.
Specifically, on page 6, the authors have made the following statement about the influence of DeConto and Pollard (2016) on the U.S. interagency sea level rise scenarios report of Sweet et al. (2017) (my emphasis):
“To be consistent with “recent updates to the peer-reviewed scientific literature”, they issued an “Extreme” global mean sea-level projection of 2.5 m by 2100 for RCP8.5, exceeding the previous upper bound of 2.0 m based on Pfeffer et al. (2008). Their Extreme projection relied on a large AIS contribution, based primarily on DP16. [Footnote 4]
[Footnote 4] The 2.0 m upper bound of Pfeffer et al. (2008) assumed large contributions from both ice sheets: 0.54 m from the GrIS and 0.62 m from the AIS. In AR5, Church et al. (2013) estimated a likely upper bound of just 0.21 m for the GrIS, since process models do not support “the order of magnitude increase in flow” in Pfeffer et al. (2008). To reach an upper bound of 2.0 m or more, S17 therefore needed an increased AIS contribution of ∼1.0 m or more. To support this increase, they cited DP16 along with the expert-judgment assessment of Bamber and Aspinall (2013). However, the latter study gave a high-end (95th percentile) estimate of 0.84 m SLR from the two ice sheets, less than Pfeffer et al. (2008). Other studies cited by S17 did not give independent evidence of a large AIS contribution. Thus, both the “High” projection of 2.0 m and the “Extreme” projection of 2.5 m in S17 relied on DP16’s claim that the AIS could contribute at least a meter of SLR by 2100."
The highlighted statement is a misstatement of fact. The accompanying footnote is also erroneous, as is the part of the statement, on page 7, that references DeConto and Pollard (2016) in the context of Sweet et al. (2017), i.e., “(2) S17 and other reports that were published in 2016–2020 and relied on DP16 for the AIS contribution.”
Here is the relevant paragraph from page 14 in Sweet et al. (2017):
“The growing evidence of accelerated ice loss from Antarctica and Greenland only strengthens an argument for considering worst-case scenarios in coastal risk management. Miller et al. (2013) and Kopp et al. (2014) discuss several lines of arguments that support a plausible worst-case GMSL rise scenario in the range of 2.0 m to 2.7 m by 2100: (1) The Pfeffer et al. (2008) worst-case scenario assumes a 30-cm GMSL contribution from thermal expansion. However, Sriver et al. (2012) find a physically plausible upper bound from thermal expansion exceeding 50 cm (an additional ~20-cm increase). (2) The ~60 cm maximum contribution by 2100 from Antarctica in Pfeffer et al. (2008) could be exceeded by ~30 cm, assuming the 95th percentile for Antarctic melt rate (~22 mm/year) of the Bamber and Aspinall (2013) expert elicitation study is achieved by 2100 through a linear growth in melt rate. (3) The Pfeffer et al. (2008) study did not include the possibility of a net decrease in land-water storage due to groundwater withdrawal; Church et al. (2013) find a likely land-water storage contribution to 21st century GMSL rise of -1cm to +11 cm. Thus, to ensure consistency with the growing number of studies supporting upper GMSL bounds exceeding Pfeffer et al. (2008)’s estimate of 2.0 m by 2100 (Sriver et al., 2012; Bamber and Aspinall, 2013; Miller et al., 2013; Rohling et al., 2013; Jevrejeva et al., 2014; Grinsted et al., 2015; Jackson and Jevrejeva, 2016; Kopp et al., 2014) and the potential for continued acceleration of mass loss and associated additional rise contributions now being modeled for Antarctica (e.g., DeConto and Pollard, 2016), this report recommends a revised worst-case (Extreme) GMSL rise scenario of 2.5 m by 2100.”
In developing the report, we wished to provide a number of scenarios that could be used to fully bracket the evidence base for the physically possible 21st-century sea level rise, as well as to provide expert judgement about the central tendency/best-guess trajectory. These ranged from 0.3 m at the lowest of the lower bounds to 2.5 m at the uppermost of the upper bounds. Briefly, the motivation was to support as wide a range of decision contexts as existed at the time in coastal risk planning and management (e.g., see Hinkel et al., 2015, Nature Climate Change, and many others), including long-term adaptation pathways approaches and “stress test” type applications, both of which often use a “not-to-be-exceeded,” upper bound metric of performance.
As described in the quoted paragraph from Sweet et al. (2017), above, we arrived at the 2.5 m upper bound by synthesizing a number of lines of evidence from numerous studies, as well as the IPCC AR5, to individually interrogate the physically possible ranges of the contributing components to global-mean sea level rise. This was new evidence, and/or new synthesis of that evidence, since Pfeffer et al. (2008), the study that helped define the physically possible upper bound for a preceding U.S. interagency sea level rise scenarios report (Parris et al., 2012).
All of these studies predated the publication of DeConto and Pollard (2016); we had already decided on the 2.5 m upper bound, and completed most of the work of developing the global and regional scenarios, before that paper was published. As just one example, Kopp et al. (2014) estimated 2.45 m as the 99.9 percentile outcome for global-mean sea level rise in 2100 under RCP8.5. Once DeConto and Pollard (2016) was published, we added it to our citation list as another piece of evidence, but the conclusions of that paper had no influence on our choice of 2.5 m as the upper bounding scenario. The successor report to Sweet et al. (2017), i.e., Sweet et al. (2022), stated this clearly, as well (e.g., see page 11): “In Sweet et al. (2017), these scenarios were developed to span a range of 21st-century GMSL rise from 0.3 m to 2.5 m. Sweet et al. (2017) built these scenarios upon the probabilistic emissions scenario–driven projections of Kopp et al. (2014).”
The bottom line is that, if DeConto and Pollard (2016) had never been published, we would have written exactly the same report at the time that we wrote it.
In closing, I wanted to note that, on the initiative of one of the authors of this brief communication (DB), he and a number of others of us (including myself and Kopp, as well as DeConto) spent substantial time in productive discussions of the very points I have just summarized, and related topics, in the broader context of the nuances of using cutting-edge sea level rise science to support decision-making. These extensive discussions following the publication of Sweet et al. (2017) resulted in an AGU presentation by DB (see https://par.nsf.gov/servlets/purl/10066643), and a written summary of our engagement (see https://acwi.gov/climate_wkg/minutes/final_agu_consensus_statement_probabilisitic_projections_dec_2017.pdf), both of which reflected a useful integration of our diversity of perspectives as scientists and practitioners.
The authors should remove language stating or implying a reliance of Sweet et al. (2017) on DeConto and Pollard (2016). That would be a good first step in helping the paper be considered for publication. Note that I do not, in any way, have any objection to the authors disagreeing with the decision in Sweet et al. (2017) to use 2.5 m globally by 2100 as the top-end, bounding scenario on other grounds. Such a disagreement would simply have to be justified in terms of the totality of references and lines of evidence summarized above, absent any reliance on DeConto and Pollard, as well as the stated purpose of the use of a limiting upper-bound scenario in that report - in other words, the choice to include 2.5 m not because it is at all likely, but precisely because it is very, very unlikely.
Finally, while my main concern is helping the authors correct this particular error, I do also largely agree with the criticisms outlined in Community Comment 1 (CC1: '"Actionable" for whom, in what decision context?', Robert Kopp, 15 Mar 2024). It would be good to see the authors respond to and/or address those in their revision.
I appreciate the authors spending the time and effort to grapple with these issues in the literature. I continue to be very supportive of having these types of issues and ideas discussed, and I believe the continuation of the dialogue through this paper is valuable.
Citation: https://doi.org/10.5194/egusphere-2024-534-RC1 -
AC1: 'Comment on egusphere-2024-534; reply to Christopher Weaver', William Lipscomb, 29 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC1-supplement.pdf
-
CC3: 'Comment on egusphere-2024-534 Defining the rules so we know when to break them', Rajashree Datta, 11 Apr 2024
Actionable science calls for a higher standard for communication of science, including rating findings for reliability (as suggested by the authors). This is both because (a) even an excellent single paper typically addresses only a portion of Earth system components and because (b) guidelines actually help articulate exceptions.
On the complexity of scientific findings vs. adaptation
Presumably, we would not present decision-makers with non-peer-reviewed SLR estimates and expect them to judge their merit based on the specific decision context. If we accept the current social production of science, i.e. “peer review” (also imperfect), then a higher standard for “actionable science” is simply a logical extension.
Keller et al., 2021 (mentioned by Dr. Kopp) specifically discusses the importance of “Linking the Required Disciplines” in the management of climate risk. As an example: climate change introduces both direct impacts of SLR on coastlines and enhanced weather extremes inland. The impact on weather extremes is not typically the purpose of any single scientific paper strictly focused, for example, on the impacts of ice-sheet loss on SLR. To focus on the potential opportunity cost: if efforts are focused on coastlines now in response to an extreme SLR scenario, what of the potential loss of funding for adapting to extreme weather scenarios inland? This problem is more acute in regions with fewer resources than the one discussed here (New York City), which may benefit from clearer communication of the scientific consensus.
On guidelines and exceptions
Importantly, the authors do not advocate limiting novel science, but rather avoiding its misrepresentation, i.e. “It is better to discuss such a claim, including the gaps in the evidence, than to disregard it.” In another comment, Dr. Kopp suggests (in summary) that it is critical to present the long tail and leave room for a dynamic response, even where evidence is lacking, based on the extent of potential risk (and the associated benefit of more extensive adaptation). I see no meaningful contradiction between the need for a guideline presented by the authors and the presence of exceptions. In fact, the “exceptionality” here is still defined in reference to some guideline and underlying rationale, thus underlining the need for the guideline.
The IPCC acts as a first, but possibly inadequate, level of synthesis. In fact, the purview of the IPCC has expanded over time precisely to accommodate evolving needs. A continued discussion of not just the “what” of actionable science, but also the “why” (the philosophical underpinning, as this paper explores) can inform precisely when it is important to break with consensus and to focus research into novel claims. This is particularly true because, while the presumed audience for IPCC predictions and adaptation recommendations is “decision-makers”, there is a far larger population which needs to understand the philosophy governing adaptation priorities (and has less time to examine footnotes in the IPCC and specific case studies): voters.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC3 -
AC2: 'Comment on egusphere-2024-534; reply to Robert Kopp', William Lipscomb, 29 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC2-supplement.pdf
-
CC5: 'If the authors reject decision-making under deep uncertainty (DMDU), they must do so through critical engagement with the DMDU literature', Robert Kopp, 01 Jun 2024
Unfortunately, the authors’ response to my comment only strengthens my concerns with this manuscript.
In my mind, the core of the authors’ response is the following bold statement:
- "We disagree about the value of low-confidence processes in decision-making. We are comfortable with including low-likelihood processes, if the likelihoods are scientifically supported (i.e., the relevant processes are understood with at least medium confidence). We object, however, to including processes which are so poorly understood that it is not yet possible to make robust, quantitative projections. Giving premature credence to low-confidence processes can lead to misuse of scarce public and private resources and can damage the credibility of the climate science enterprise."
This expands upon the statement in the manuscript that “We suggest that the IPCC medium-confidence sea-level projections are actionable, but not the low-confidence projections that include deeply uncertain processes such as MICI.”
These are bold statements that appear to reject the entire field of decision making under deep uncertainty (DMDU). Scholarly practice would suggest that such a bold claim be made head on – that is, if the authors are to dismiss the entire DMDU literature, they need to cite and engage with that literature in a manner that justifies rejecting decades of scholarly work (and of practice), a relatively recent synthesis of which is provided by
- Marchau, V. A., Walker, W. E., Bloemen, P. J., & Popper, S. W. (2019). Decision making under deep uncertainty: from theory to practice. Springer Nature, 405 pp. https://doi.org/10.1007/978-3-030-05252-2
Contrary to this literature, the authors argue that the only appropriate decision-making practice regarding those elements of sea-level projection characterized by deep uncertainty (or low confidence) is to ignore them. I strongly disagree with this position, but would welcome an argument that makes it while engaging with this robust body of literature.
(The authors may also find it helpful to engage with the new manuscript: Lempert, R., Lawrence, J., Kopp, R., Haasnoot, M., Reisinger, A., Grubb, M., & Pasqualino, R. The Use of Decision Making Under Deep Uncertainty in the IPCC. Frontiers in Climate, 6, 1380054, https://doi.org/10.3389/fclim.2024.1380054).
I wonder, though, whether the authors actually believe their own claim. After all, while probabilistic emissions forecasts are now available and sometimes used, the dominant approach to projecting future emissions remains a scenario-based one, motivated by the deep uncertainty in future technological and policy development. It is certainly the case that there has been – and often remains – broad disagreement among experts regarding technological and policy development processes that would justify applying the label of “low confidence.” As Moss et al. (2010, https://doi.org/10.1038/nature08823) noted in laying out the RCP/SSP framework, "An underlying key issue [and, thus, point of low agreement and therefore low confidence] is whether probabilities can be usefully associated with scenarios or different levels of radiative forcing; for example, the probability that concentrations will stabilize above or below a specified level."
Do the authors think that the issue highlighted by Moss et al. must be answered affirmatively -- that is, that we must show that probabilities can be usefully associated with emissions scenarios -- for climate projections driven by those scenarios to be actionable? If experts cannot agree on probability distributions regarding policy and technological development, does that moot the actionability of any climate projections -- which is to say, any climate projections beyond about a 20-30 year time horizon -- that exhibit substantial sensitivity to these deeply uncertain processes? If so, that somewhat moots the discussion of low-confidence ice-sheet processes, which we have little reason to think might manifest at significant levels until late in the century.
I continue to believe that the authors’ actual objection is not to the use of deeply uncertain information in decision making, but the use of such information with decision frameworks that are designed for probabilistic information.
Citation: https://doi.org/10.5194/egusphere-2024-534-CC5
-
RC2: 'Comment on egusphere-2024-534', Rebecca Priestley, 30 May 2024
General comments
I found this paper very interesting. I am familiar with DeConto and Pollard's 2016 paper, and the subsequent media coverage, but was not aware of the extent to which these projections were taken on by policymakers and practitioners. This case study aspect of the paper is very interesting and valuable (though I note the corrections advised by Chris P. Weaver in the interactive review). I also found the comments about disciplinary journals vs high impact journals (lines 54-64) particularly valuable.
This has potential to be an important paper, so my feedback is quite detailed with much of it focused on precision of language, to ensure clear and purposeful communication of the argument of the paper.
Specific comments on section 2 of the paper
I have specific comments about language use in section 2 of the paper, firstly around use of the word hypothesis (e.g., line 68: ‘transform novel hypotheses into accepted knowledge’, line 99: ‘A scientific hypothesis is sufficiently accepted for use in decision-making when it is supported by … etc’ and line 102: ‘peer-reviewed hypotheses must be scrutinized by a diverse group of scientists etc’). I was surprised by the focus on the word 'hypothesis' here. Not all science starts with a hypothesis, and even when it does, this word is usually used to describe what comes at the start of a study, not the end. I would have thought that it’s not the ‘peer-reviewed hypothesis’ that is the ‘actionable’ (or not) part of a research project, or the resulting paper. Rather, it’s the peer-reviewed conclusions, claims, findings, or theories. Or, as the quote from Behar says, the ‘data, analysis and forecasts’ (line 25).
In the same section, I also suggest a review of the words ‘viewpoints’, ‘opinion’, and ‘assumptions’. Scientists do, of course, have viewpoints, opinions, and assumptions, but this paper is focused on peer-reviewed published research which (we hope!) relies on evidence and observations that lead to claims and conclusions (even if it doesn’t meet the criteria for actionable research). At the moment the paper could imply that scientists make claims in their published research, or IPCC authors make decisions, based only on opinions and assumptions (which could feed into politically motivated narratives seeking to undermine climate science).
I realise that different disciplines have different norms about language use, but with an interdisciplinary paper like this it’s important that the meaning of this language is accessible to a broad readership. I suggest therefore that language use is reviewed, especially around the words I’ve mentioned here.
Specific comments on section 3 of the paper
The first paragraph of section 3 is important, but is not communicating as clearly as it could. In lines 108-110 I suggest removing reference to ‘land ice’. At the moment, the AIS is listed as an example of ‘land ice’ in one sentence, then the next sentence says it ‘contains marine-grounded ice’. To avoid confusion, but not take away any meaning, the reference to ‘land ice’ could be removed and the more standard separation of SLR contributors into thermal expansion, mountain glaciers, the Greenland Ice Sheet and AIS used (as has been done in line 180). Then, in line 110, which says ‘if melted, this ice could raise sea level by several meters’, it needs to be explicit what ice is being referred to here.
In line 140 it would be useful to provide the figures for the DeConto (2021) lowered 21st-century SLR contribution, to allow comparison with the DP16 figures.
Line 71: states that IPCC assessments ‘are directed mainly to policymakers but are read by practitioners’ – I suggest that the difference between policymakers and practitioners is teased out in this paper, and more emphasis given to the role of policymakers. The publications referred to in section 3 seem to be interpretations by policymakers, that were then actioned by practitioners. In other parts of the paper, though, the emphasis on practitioners suggests that they are actioning science without this layer of interpretation by policymakers. For example in line 213 is it primarily practitioners or policymakers who need to ‘view novel peer-reviewed claims with caution’? In line 225, is it ‘scientists and practitioners’ who need to work more closely together, or scientists and policymakers?
Technical corrections and points of clarification
Line 27: says the term ‘actionable science’ (which I was not familiar with) has been ‘widely adopted’ but there’s only one citation here. More citations here would strengthen this claim.
Line 30: ‘Our goal is to offer guidance …’ who to? Is this guidance for scientists, practitioners, or both?
Line 38: As a science historian I have to note that the discipline is decades on from the ‘lone genius’ approach, as is much popular science history. This is perhaps a traditional approach, or a twentieth century approach, but I’m not sure it’s right to say ‘often’ when referring to current work.
Lines 66, 67: Mentions first ‘press releases’ and then ‘media accounts’. It would be good to explicitly make the connection between the press releases and the media accounts – while the press releases might cast the work in a dramatic light, the media stories often go further, and the headlines (which are not written by the journalists) go further still, seeking attention. Do you have a citation for the statement that ‘practitioners typically learn of scientific advances through media coverage’ (line 65)?
Line 90: citation and page number needed for this quote
Line 95: Makes an important point, but is it also worth noting that opting for ‘higher ground’ is not necessarily guarding against ‘unknown risks’? It could alternatively (or also) be seen as choosing an option with a longer lifespan, given that sea-level rise will continue beyond 2100.
Line 186: what does ‘community’ mean in this context? The scientific community?
Line 213: Is ‘contradict’ the right word here, or would ‘challenge’ be more appropriate?
I look forward to your response. As I said at the start, this is a very interesting paper.
Citation: https://doi.org/10.5194/egusphere-2024-534-RC2
AC3: 'Reply to RC2 and RC3', William Lipscomb, 16 Sep 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC3-supplement.pdf
RC3: 'Comment on egusphere-2024-534', Anonymous Referee #3, 30 May 2024
In this manuscript, Lipscomb et al. discuss the challenges of providing ‘actionable’ scientific research in the context of climate adaptation. The authors emphasize the importance of distinguishing between novel hypotheses/claims and actionable science that can be used for decision-making. They discuss this (also) within the context of a recent high-impact study projecting rapid sea-level rise from the Antarctic ice sheet due to a low-confidence process. Overall, Lipscomb et al. propose (1) an epistemic criterion for determining when scientific claims are actionable, based on multiple lines of high-quality evidence and evaluation by a diverse group of experts, and (2) recommendations for scientists and practitioners to improve the use of actionable science in decision-making.
This manuscript has clearly attracted a lot of interest and sparked productive discussions (see the comment section and the follow-up AGU presentation highlighted by Chris Weaver, CC2). The authors have already replied quite extensively to the main comments raised by Robert Kopp (CC1) and Chris Weaver (CC2). If the authors are willing to include their replies to CC1 and CC2 in the revised version of the manuscript (in particular, regarding the misstatement about the relevance of DP16 to the Sweet et al. (2017) report), I will consider it ready for publication as is; the manuscript is in general very well written and clearly an excellent fit for The Cryosphere (Brief Communication). I do have a couple of minor comments to add, which the authors can take as suggestions for the revised manuscript. I do not consider these (minor) comments strictly required for publication; I rather hope they can contribute to the discussion.
1) In general, I agree with the authors on how the epistemic criterion for actionable science is formulated, and with the recommendations to scientists, journalists, and practitioners laid out in Section 4. Both the criterion and the recommendations largely rely on IPCC reports or meta-analyses/community assessments. While this makes sense to me, I am skeptical about how the criterion and recommendations could be applied in practice, as IPCC reports (or other community assessments) are published on much longer timescales than individual studies, and the media are typically very quick to pick up high-impact claims (often at the same time a new study comes out in a high-impact journal). Even assuming improved awareness and communication between scientists, journalists, and practitioners in the future (which is one of the authors' recommendations and certainly something we should aspire to), it seems to me that preventing a case similar to the one presented by the authors would largely be left to individual choices (for instance, being cautious when making or dealing with new claims). I am fine with it if the main goal of the authors is to start a discussion on the topics presented, rather than proposing practical solutions to implement their recommendations. However, I think it would be of great help to see some (more) critical reflection on the latter. For instance: should it become part of the peer-review process to have reviewers provide some level of confidence and/or rate how reliable a study can be considered, or even how suitable it is for media coverage (using, for example, a formal rating system similar to the one used to evaluate originality, quality, etc.)?
2) Line 65: ‘practitioners typically learn of scientific advances through media coverage’. I think this is quite an important point of focus in the manuscript: the link between scientific results, media coverage, and practitioners. I think, however, that this sentence is a bit too vague, and it would be good to have reference(s) backing it up. If there are none, maybe it would make sense to make this point for the DP16 study specifically and to avoid generalizing (‘typically learn’).
Citation: https://doi.org/10.5194/egusphere-2024-534-RC3
CC4: 'Literature on science usability and decision making context', Jeremy Bassis, 01 Jun 2024
This short paper describes a framework for actionable science. I am writing this comment as a glaciologist who also works broadly in the field of adaptation and science usability. I would really like to encourage the authors and editors to expand the paper to full length to do justice to these ideas, because in the current short format there is much ambiguity, with significant room for mischief and harm.
To start, the study of how knowledge systems are applied to decision making often goes under the name “science usability”, but I have also seen it referred to as “actionable science”. There is a substantial literature on the process of knowledge creation for planning and decision making. Foundational to this field is the concept that for knowledge to be applied in a decision-making or planning context, it requires credibility, salience, and legitimacy (see, e.g., Cash et al., 2003). To quote directly from Cash et al. (2003): “. . . credibility involves the scientific adequacy of the technical evidence and arguments. Salience deals with the relevance of the assessment to the needs of decision makers. Legitimacy reflects the perception that the production of information and technology has been respectful of stakeholders' divergent values and beliefs, unbiased in its conduct, and fair in its treatment of opposing views and interests.”
The definition of “actionable” that the authors present includes only what would traditionally be called “credibility”, omitting salience and legitimacy completely. This is a fatal omission from a usability standpoint, because there is abundant, credible literature demonstrating the importance of salience and legitimacy (e.g., Cash et al., 2003). To give two examples of salience: communities adapt to local sea level, not global sea level, so global sea-level rise might have low salience and thus low usability for many decision makers; and century-and-longer sea-level-rise projections have little salience for, say, 30-year home mortgages. Legitimacy is also important, as it concerns the process and values of science creation, and there are numerous examples where low legitimacy jeopardized science usability.
Clearly, usability depends on the specific decisions and the community making them, and, as noted by another commenter, usability cannot be divorced from this context: actionable in one decision context does not mean actionable in a different decision context. We also know that the way to increase the usability of knowledge is through co-generation, but that this is inefficient. Usually it is knowledge brokers and boundary organizations that play the role of interpreters and translators of knowledge for users. There is, again, abundant literature in this field, and it is a shame to have no dialogue with the many previous studies that identify criteria AND the process of usable knowledge production.
My view is that the present study would have *much* more value if it were reframed more specifically around credibility rather than more broadly around usability/actionability. I would even recommend switching out the term "actionable" for "credible" to better reflect existing terminology. Even here, I would have really liked the authors to examine the role of values and tradeoffs in defining credibility, especially recognizing that scientific uncertainty can often be weaponized to delay action.
Determining whether a study has multiple lines of evidence to support it is, as we can see from the comments, subjective, and the authors have not demonstrated that their definition improves decision making. As the current paper reads, it seems more like it was specifically designed as a rebuttal of DeConto and Pollard’s (2016) projections and the impact they have had as a high-end scenario in planning and adaptation decisions, rather than as a general framework. If that is the goal, then the study should perhaps be framed as such.
To give some additional examples, the explosive disintegration of the Larsen B ice shelf and the subsequent acceleration of its tributary glaciers were not anticipated by any models, nor was the sudden acceleration and retreat of Jakobshavn, Pine Island, or Thwaites Glacier. The disintegration of the Conger ice shelf was also a surprise, although perhaps it shouldn’t have been. Going beyond glaciology, the pre-eminent physicist of his time, Lord Kelvin, famously said that “Heavier-than-air flying machines are impossible” a mere 8 years before the first airplane successfully flew (Schoemaker, 1995). The Harvard Economic Society announced that “A severe depression like that of 1920-1921 is outside the range of probability” on November 16, 1929 (Schoemaker, 1995). I could go on with other examples in which real-world events failed to adhere to the academic consensus. The point here isn’t that scientists can be wrong (of course we are!), but that how communities, stakeholders, and decision makers choose to incorporate information depends heavily on values, resources, and objectives. Would we be better off stress testing, strategizing, or planning to deal with low-probability, high-impact events? I don’t know, and I would be loath for scientists to insist on being the arbiters of these value-laden decisions. Actionable doesn’t always mean infrastructure; it might mean longer-term strategic thinking to be better positioned to incorporate new information as it becomes available. Perhaps one useful framing for the authors is that their approach might be best suited to mega-infrastructure investments that are expensive and take decades to plan and build. Different criteria would be appropriate for different scales of intervention.
My penultimate point illustrates the degree of mischief that is possible if terminology and definitions aren’t tightened up and illustrated with multiple case studies. The authors argue that DeConto and Pollard’s study of sea-level rise isn’t credible (actionable, in their words) because there aren’t multiple lines of evidence to support ice-cliff instability. I suspect that the paleo record involves more ambiguity than the authors concede here, but nonetheless this is a fair point. However, we can also say that there is zero evidence that the calving front will remain stationary, and very little evidence to support *any* calving law used in ice-sheet modeling. By the authors’ same logic, we might conclude that *none* of the projections of sea-level rise are credible (actionable, in the authors’ words). We can apply this same reasoning to other processes. Should we doubt the entire enterprise of climate modeling because of the quasi-empirical treatment of clouds, precipitation, and aerosols? Clearly, this is not what the authors intend, but it might be an unintentional side effect if not clarified. To this end, the IPCC, NOAA, the NASA sea-level team, and multiple other organizations are involved in summarizing the literature and play a well-documented role in determining “credibility”; these organizations form one step in a chain of boundary organizations. Here there needs to be dialogue with the role played by these existing structures and organizations, and with their interplay with other boundary organizations.
My final point relates to the definition of “action”. As noted by Knaggård (2014), one of the common actions that stakeholders take is to decide that the scientific uncertainty is currently too large, and to devote resources to research to better quantify and hopefully reduce uncertainties. By this definition, the action associated with high-impact/low-probability events might be more research (potentially coupled with adaptive decision making). The most common action that decision makers take, however, is simply to do what is currently politically and technically feasible (Knaggård, 2014). These decisions, based on political expediency and co-benefits, depend more on the solution space than on scientific uncertainty, and so establishing credibility is less important than salience and legitimacy.
I’m very sympathetic to the authors’ goals, but I’m not sure that, to use the authors’ own criterion, their definition of "actionable" has multiple, independent lines of evidence to support its use and adoption. I think it would be hard to demonstrate this in the limited space available, but it would be a great addition if the authors built their case through a larger number of case studies and a literature review, and applied their own criterion of multiple lines of evidence to their own proposed definition of actionable science.
Refs
Cash, David W., et al. "Knowledge systems for sustainable development." Proceedings of the National Academy of Sciences 100.14 (2003): 8086-8091. (And many more on usability!)
Schoemaker, Paul J. H. "Scenario planning: a tool for strategic thinking." MIT Sloan Management Review (1995).
Knaggård, Åsa (2014). What do policy-makers do with scientific uncertainty? The incremental character of Swedish climate change policy-making. Policy Studies, 35:1, 22-39. DOI: 10.1080/01442872.2013.804175
All the best,
Jeremy Bassis
Citation: https://doi.org/10.5194/egusphere-2024-534-CC4
CC6: 'Comment on egusphere-2024-534', Judy Lawrence, 06 Jun 2024
Comment on Lipscomb, Behar, Morrison Brief Communication
By Judy Lawrence, Marjolijn Haasnoot, Robert Lempert
We are researchers and science brokers who work with SLR science in the context of decision making under uncertainty (DMDU). We recognize the challenge of dealing with multiple scenarios that are frequently updated as new understanding emerges. However, the proposal set out by LBM runs the risk that we wait until there is certainty before taking adaptation action. This would also fail the precautionary test in the UNFCCC, which underpins the Paris Agreement, and would lead to delayed adaptation decisions.
Approaches have emerged, and been assessed in IPCC AR6, for addressing the dilemma that decision makers find themselves in as new science continues to emerge and decisions have to be made under uncertainty about the trajectory and pace of change. These DMDU approaches and methods are able to deal with large uncertainties and updated (climate) information. The LBM paper mentions adaptive planning and integrating SLR information into planning, but does not elaborate on what this means or how these approaches can address the problem of over- and under-investment in adaptation. This omission leads to a very directive solution that would not be decision relevant (neither salient nor legitimate).
Furthermore, we agree with Bassis's comment that the LBM paper focuses only on the matter of credibility, and that salience and legitimacy are critical for decision making under uncertainty, enabling local context to inform decisions. The response from LBM that the paper is not about how the science is used misses the point that science does not sit in a vacuum outside how it may be used. In fact, the authors have defined their problem with an example of how the projections were ‘misused’ in the San Francisco case. Science and its use are inextricably linked.
One cannot set a fixed criterion/standard for a changing context (both the science and the societal values that decision makers weigh up). The question of the standard of the science cannot be divorced from a discussion about how sea-level rise projections are used. Using DMDU methods such as Dynamic Adaptive Policy Pathways (DAPP; Haasnoot et al. 2019) or Robust Decision Making (Lempert et al. 2019) to stress test adaptation options against a range of scenarios gives decision makers an idea of how sensitive each strategy is; this information can then be weighed against the other considerations that decision makers must take into account as representatives of their communities, now and in the future. Given that surprises around polar ice processes and feedbacks are occurring, even high-end/low-confidence scenarios are useful to remind decision makers that one cannot rule out such outcomes, as also stated in IPCC AR6. A DAPP approach enables adaptation to be broken down into near-term actions and mid- to long-term options, to avoid lock-in of investments that are costly to adjust in the future and that shift adaptation costs to future generations. It helps to prepare and to keep options open, or to create them through innovation and planning, allowing further adaptation if necessary. DAPP thus enables more robust decisions to be made and contingency actions to be ready.
Given the time it takes for long-lived infrastructure projects to be designed and implemented, especially for coastal adaptation to high and rapid sea-level rise, a precautionary approach has merit. Not considering such scenarios runs the risk of acting too late or investing in the wrong measures, resulting in high sunk costs and transfer costs. Considering them, on the other hand, gives decision makers greater confidence in a changing situation (in both the science and societal values).
The nature of the decision process is only lightly addressed in the paper, beyond one example. A range of approaches for how SLR projections can be used should be proffered. These can be found in the literature.
For example, in the Netherlands a group of experts (the Signal Group) advises the government on relevant new research, which is then assessed for its potential implications and the need for further research or actions (Haasnoot et al. 2018). The follow-up research from Deltares and KNMI led to further assessment of the need to revisit the adaptation strategy. It also raised awareness that sea levels will be higher in the long term, even under low global warming scenarios, that the current adaptive plan would not be sufficient, and that transformative adaptation would be needed. While near-term actions were not changed immediately, it was recognised that preparations were needed to be able to adapt further if such accelerated SLR became a reality.
In New Zealand, revised national coastal hazards and climate change guidance (Ministry for the Environment 2024) has adopted a tailored approach for different types of decisions, using DAPP to stress test adaptation options with downscaled global scenarios that enable polar ice responses and vertical land movement, which vary around the coast, to be incorporated into SLR. Periodic revisions are undertaken to reflect new science, and a precautionary approach is adopted, considering at least a 100-year timeframe to account for change and uncertainty. Specific infrastructure guidance is also available (Lawrence and Allison, 2024).
Adaptive pathways planning and monitoring for signals is also done in practice in the Thames Estuary plan (Environment Agency, 2012; Ranger et al. 2013) and in New York (Blake et al., 2019; Rosenzweig). These plans also include regular ongoing assessment and evaluation of changes (e.g. ongoing, and every six years, in New York and in the Netherlands).
References
Environment Agency. (2012). TE2100 Plan. Managing Flood Risk Through London and the Thames Estuary.
Haasnoot, M., van ’t Klooster, S., & van Alphen, J. (2018). Designing a monitoring system to detect signals to adapt to uncertain climate change. Global Environmental Change, 52, 273–285. https://doi.org/https://doi.org/10.1016/j.gloenvcha.2018.08.003
Haasnoot, M., Warren, A., & Kwakkel, J. H. (2019). Dynamic Adaptive Policy Pathways (DAPP) - Decision Making under Deep Uncertainty: From Theory to Practice (V. A. W. J. Marchau, W. E. Walker, P. J. T. M. Bloemen, & S. W. Popper, Eds.; pp. 71–92). Springer International Publishing. https://doi.org/10.1007/978-3-030-05252-2_4
Lawrence, J., & Allison, A. (2024). Guidance on adaptive infrastructure decision-making for addressing compound climate change impacts. https://deepsouthchallenge.co.nz/wp-content/uploads/2024/04/Guidance-on-adaptive-decision-making-for-addressing-compound-climate-change-impacts-for-infrastructure.pdf
Lempert, R. (2013). Scenarios that illuminate vulnerabilities and robust responses. Climatic Change, 117(4), 627–646. https://doi.org/10.1007/s10584-012-0574-6
Ministry for the Environment. 2024. Coastal hazards and climate change guidance. Wellington: Ministry for the Environment. Pub. 1805 https://environment.govt.nz/assets/publications/Coastal-hazards-and-climate-change-guidance-2024-ME-1805.pdf
Ranger, N., Reeder, T., & Lowe, J. (2013). Addressing `deep’ uncertainty over long-term climate in major infrastructure projects: four innovations of the Thames Estuary 2100 Project. EURO Journal on Decision Processes, 1(3), 233–262. https://doi.org/10.1007/s40070-013-0014-5
Citation: https://doi.org/10.5194/egusphere-2024-534-CC6
AC4: 'Comment on egusphere-2024-534', William Lipscomb, 24 Oct 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-534/egusphere-2024-534-AC4-supplement.pdf