This work is distributed under the Creative Commons Attribution 4.0 License.
Technical Note: Quantifying Hazard Probability and Risk from Ensemble Projections of Downscaled Climate Variables
Abstract. Hazard metrics downscaled from climate model projections are commonly used for assessing future risk and related potential losses at local spatial scales. Quantifying changes in risk into actionable information is essential for building resilience to climate change through adaptation of existing operational assets, siting and design of future assets, identifying transition risks as well as opportunities, determining optimal paths towards net-zero carbon operations, and assessing future cost/benefit ratios for many other current and future actions. In addition to projecting the most-likely, or expected, values for a given hazard, it is important to quantify the probability distribution for that hazard at any specified time in the future. Here we describe a method to incorporate uncertainty in the downscaled hazard metrics to produce a probabilistic forecast, first for any single model, and then for a multi-model ensemble. The uncertainty for any single model represents an estimate of the natural variability of the hazard that is intrinsic to that model, while the uncertainty in the ensemble represents the natural variability of all the models as well as the spread of each model’s projected most-likely value of the hazard. Loss probability can then be determined from the hazard probability via application of impact (or damage) functions that link hazard to loss. The method is first applied to a simple temperature-based hazard variable and then to a multi-climate-variable-based hazard (fluvial flood). More general application procedures are also discussed.
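The chain described in the abstract — a per-model hazard distribution, an equal-weight multi-model mixture, an exceedance curve, and loss via an impact function — can be sketched in a few lines. This is a minimal illustration under assumed Gaussian per-model distributions and an invented linear damage function; all numbers are placeholders, not the paper's data or models.

```python
# Minimal sketch of the abstract's workflow (illustrative assumptions only):
# per-model hazard distributions -> equal-weight ensemble mixture ->
# modeled exceedance curve -> expected loss via an impact (damage) function.
import numpy as np
from scipy.stats import norm

# Hypothetical per-model projections for one location and year: most-likely
# hazard values (e.g., from a trend fit) and natural-variability spreads.
mu = np.array([31.0, 34.5, 29.8])
sigma = np.array([3.0, 2.5, 3.4])

def ensemble_sf(x, mu, sigma):
    """P(hazard > x) under an equal-weight Gaussian mixture of the models."""
    return np.mean([norm.sf(x, m, s) for m, s in zip(mu, sigma)], axis=0)

def damage_fraction(x, threshold=30.0, scale=10.0):
    """Invented impact function: fractional loss rises linearly above a threshold."""
    return np.clip((x - threshold) / scale, 0.0, 1.0)

x = np.linspace(20.0, 50.0, 301)
exceedance = ensemble_sf(x, mu, sigma)   # modeled exceedance curve
density = np.mean([norm.pdf(x, m, s) for m, s in zip(mu, sigma)], axis=0)
expected_loss = np.sum(damage_fraction(x) * density) * (x[1] - x[0])
```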
Interactive discussion
Status: closed
- CC1: 'Comment on egusphere-2022-9', Richard Rosen, 05 Mar 2022
As with all complex modeling of the physics of the real world, the outputs of such models should not be described as the "probabilities" of any particular event actually happening, since those numbers only reflect the properties of the models, not the likelihood of real-world events. I think the language of the text should be very carefully edited to reflect this fact, so that readers of this and similar articles about climate change models are not deceived.
Citation: https://doi.org/10.5194/egusphere-2022-9-CC1
- AC1: 'Reply on CC1', James Kossin, 07 Mar 2022
Thanks for the comment. I do agree to an extent. I agree that we could say "modeled hazard probability", similarly to saying "modeled average annual loss", and I think it would be appropriate to add a note about this in the introductory text. But to suggest that it would be a deception to infer a relationship with the probability of real world events means that we have no confidence that the CMIP models provide some representation of a future reality. We would respectfully disagree with this. The CMIP models are what most IPCC projections are based on and are very broadly used in policy-making. We don't view our motivation here as questioning the veracity of the models. We only aim to provide probabilistic bounds on what the models are telling everyone.
Citation: https://doi.org/10.5194/egusphere-2022-9-AC1
- CC2: 'Reply on AC1', Richard Rosen, 07 Mar 2022
I agree that the earth system models relied on by the IPCC in their reports have some correspondence with reality, but as I am sure you know, their results differ widely from each other even with respect to projections of the global average temperature increase. That is, they differ by almost a factor of 3.0 from low to high for a doubling of CO2 in the atmosphere. Yet it should be easiest to predict the global average temperature most accurately. Localized regional temperature forecasts should inherently be even less accurate, and precipitation forecasts are known to be less accurate than temperature forecasts. Thus, using the word "probability", whether in your article or in IPCC reports, has always been controversial. Many of us have urged the IPCC to use a term like "model distribution" or "model likelihood", indicating that the numerical distributions cited are purely a product of model runs, and that we as scientists have no idea of the actual real-world probability distributions. In fact, looking at actual global temperature trend data over the past 40 years, to my eye the trend is running much closer to the mid-range of the model results than the model result distribution would suggest. That is, intuitively to me, the uncertainty in the global temperature increase for a doubling of CO2 might only be from 2.4 to 3.2 degrees C, not from 1.6 to 4.5 degrees C as the model results produce. But either way, the use of the word "probability" in most climate change papers is very deceptive for journalists and the public because it suggests that climate scientists know much more than they do.
In summary, if you are trying to calculate uncertainty ranges for regional temperature and precipitation impacts, the uncertainty ranges produced by climate models are so large that use of the term "probability" is misleading, if not deceptive, and we all must be careful in our use of mathematical terminology. Note that a further problem when trying to calculate "probabilities" is that some distribution, like a Gaussian curve, is often assumed, which also has no basis in scientific fact.
Citation: https://doi.org/10.5194/egusphere-2022-9-CC2
- AC2: 'Reply on CC2', James Kossin, 09 Mar 2022
Thanks again for the follow-up comment. Your points are well taken. To be completely transparent here, given the nuts-and-bolts aspect of writing technical notes, we never considered the mathematical formalism underpinning the differences between likelihood and probability. In this respect we recognize now that we were being careless with our use of “probability”. To maintain better mathematical formalism, and to avoid compounding the concerns you’ve raised about potential mis-messaging, we will modify our language. We’ll change the title from
Technical Note: Quantifying Hazard Probability and Risk from Ensemble Projections of Downscaled Climate Variables
to
Technical Note: Quantifying Modeled Hazard Likelihood and Risk from Ensemble Projections of Downscaled Climate Variables
and make similar modifications throughout the text. We’ll normalize the likelihood distributions so that we can derive what we will now call “modeled likelihood of exceedance curves”.
Let us know if you feel that this will adequately address your concerns. If not, we're happy to continue the dialog.
Citation: https://doi.org/10.5194/egusphere-2022-9-AC2
- CC3: 'Reply on AC2', Richard Rosen, 09 Mar 2022
Your proposed changes are very constructive. To repeat, the term "probability" gives readers a sense of far too much precision in the number. The IPCC continues to compound the problem when their reports say things like "to keep the temperature increase under 1.5 degrees C with a 67 percent probability the world needs to ...............".
Citation: https://doi.org/10.5194/egusphere-2022-9-CC3
- AC3: 'Reply on CC3', James Kossin, 10 Mar 2022
I'll consider the comments on the manuscript to be addressed at this point. But I would like to continue the dialog a bit along a parallel thread.
The way I see it, there are two disparate issues that we’re discussing here. The first is a matter of mathematical formalism, i.e., the difference between the formal definitions of probability and likelihood. I agree that it’s important to be correct, but I also think that this distinction will be lost on the majority of readers. The second issue I see as far more critical and far-reaching. This is the issue of implying more confidence in the projections than is supported. For example, the kind of IPCC statement that you’re describing:
“to keep the temperature increase under 1.5 degrees C with a 67 percent probability the world needs to ...............”
would probably not be interpreted much differently if we stated
“to keep the temperature increase under 1.5 degrees C with a 67 percent likelihood the world needs to ...............”
In this respect, I think that you may be placing too much emphasis on the wording at the expense of addressing the root issue. Here I would argue that the more liberal use of the grammatical qualifier “modeled” would address the issue more effectively than the use of likelihood in place of probability.
In our manuscript, in response to your comments, we have done both. That is, we replaced probability with likelihood to be more mathematically correct, but more importantly, we have included the qualifier “modeled” throughout the text. We have also added a new closing paragraph in the summary/discussion section that emphasizes interpreting our results with full knowledge of the limitations.
Citation: https://doi.org/10.5194/egusphere-2022-9-AC3
- RC1: 'Comment on egusphere-2022-9', Anonymous Referee #1, 09 Apr 2022
Summary:
This technical note presents a method to estimate changes in risk due to a changing climate while characterizing sources of uncertainty. It is a useful effort in that resource managers need to make decisions in response to increased climate-driven risks, yet there is often a divide between advances in climate science and the delivery of information targeted at stakeholder needs. While as a technical note it is not expected to be a lengthy manuscript, the paper misses important context, which weakens its contribution. Detailed comments follow.
Comments:
- Line 34, “Often this information is provided based on projections of the most-likely value of some hazard variable at some particular time” should be augmented by noting that many current studies remove time from the equation and present projected impacts for a certain level of warming or accumulated CO2 emitted (which removes the difficulty in considering the different emissions pathways independently).
- Lines 40-44, The discussion of natural variability versus model spread reminded me of the foundational work by Hawkins & Sutton (2009; 2011), who also fit a polynomial to projections to separate external from internal variability, and then added variability due to model spread to their analysis. I was surprised to see no reference to their work (let alone the many papers that have built on that over the last decade), given that this is quite similar to the approach presented here. There have also been dozens of papers presenting changes in return periods of extreme events (extreme precipitation, peak flows, extreme sea levels, …), presenting the results in many different ways. This paper needs a much better review of past work in this area so what may be new can be discerned. (A schematic sketch of this variance partitioning follows this list of comments.)
- Lines 45-51, the NASA-NEX CMIP5 downscaled data was used for this study. It would be helpful to remind readers that there are many options for downscaling climate model output. In particular for this dataset, some caveats should be noted. For example, the daily BCSD method has been shown to underestimate extreme precipitation events, with the monthly version outperforming it on some important metrics (Gutmann et al., WRR, 2014). While the paper is presenting a method that could be used with any downscaled data, that would be important information to provide.
- Following on the last comment, additional uncertainties should also be mentioned, if not included in the analysis. The choice of downscaling method can have a significant effect on the results. For example, using downscaled data from https://gdo-dcp.ucllnl.org/downscaled_cmip_projections or https://cal-adapt.org/ could help frame some of these uncertainties.
- Another source of uncertainty is the choice of impact model. As one example, the rudimentary AECOM model is used, but applying any other model would certainly give different results. Has the AECOM model been described and validated in another peer-reviewed publication? If it has not been presented for peer review somewhere, its use here does not seem supported unless it is presented and validated. The only reference provided is an untraceable professional report. There are many other simple hydrologic models available, many of which have been used in climate change studies. A brief review of some of them would be helpful, with an explanation for why the AECOM model was selected for this demonstration.
- A minor comment: line 74 “has little physical relevance” might be more accurately phrased “has no temporal correspondence with observations”.
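As a schematic of the Hawkins & Sutton-style partitioning raised in the second comment above: fit a smooth polynomial to each model's projection, take the residual variance as internal variability, and take the across-model variance of the fitted curves as model spread. The data below are synthetic and the fourth-order polynomial follows Hawkins & Sutton's choice; this is not the manuscript's code.

```python
# Sketch of a Hawkins & Sutton (2009)-style variance partitioning on a
# synthetic ensemble: smooth per-model fits separate the forced response
# (model spread) from the residual internal variability.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2006, 2100)
x = years - years.mean()

# Synthetic ensemble: shared trend + per-model offset + internal noise.
proj = (0.03 * (years - years[0])[None, :]
        + rng.normal(0.0, 0.4, (10, 1))
        + rng.normal(0.0, 0.15, (10, years.size)))

# Fourth-order polynomial fit per model approximates each forced response.
coef = np.polynomial.polynomial.polyfit(x, proj.T, deg=4)   # (5, n_models)
fits = np.polynomial.polynomial.polyval(x, coef)            # (n_models, n_years)

internal_var = np.var(proj - fits, axis=1).mean()  # internal variability
model_var = np.var(fits, axis=0)                   # model spread, per year
```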
Citation: https://doi.org/10.5194/egusphere-2022-9-RC1
- RC2: 'Comment on egusphere-2022-9', Anonymous Referee #2, 18 May 2022
In the abstract, the use of the word “forecast” is troublesome. Climate models do not produce forecasts, at least not in any statistical sense. Climate models produce projections based on assumptions about the greenhouse gas emissions associated with a given scenario, along with other forcings. These scenarios may or may not be connected to any reality. As such it is crucial to be careful with language, especially when trying to lay a foundation for a probabilistic interpretation.
The data set does not appear to include multiple ensemble members for each model. How would this approach change with the inclusion of ensemble members for each model? Similarly, is it required that the models be bias corrected, which is a part of the downscaling procedure used in this data set? What is the impact of the biases in the target dataset used in the downscaling?
There is some inconsistency in the choice of model for Tx90p. In line 95, the authors note that Tx90p is constrained to be between [0,100]. (It is not mentioned, but it is also a discrete variable with a finite set of possible values as it is defined on line 55.) The authors then choose to use a Gaussian distribution, rather than the truncated distribution as suggested in the manuscript near line 100 or some other distribution more in line with a discrete variable. The Gaussian might offer a reasonable approximation, but this should be explored further and justified accordingly.
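To illustrate the point about the [0, 100] bounds: a truncated Gaussian renormalizes the density over the valid interval, removing the spurious mass a plain Gaussian places outside it. The mean and spread below are invented for illustration.

```python
# A plain Gaussian puts nonzero mass outside [0, 100] for a bounded
# percentage variable like Tx90p; a truncated Gaussian does not.
from scipy.stats import norm, truncnorm

mu, sigma = 92.0, 6.0                            # hypothetical fit values
a, b = (0.0 - mu) / sigma, (100.0 - mu) / sigma  # standardized bounds

p_invalid = norm.sf(100.0, mu, sigma)            # ~0.09: mass above 100%
p_trunc = truncnorm.sf(100.0, a, b, loc=mu, scale=sigma)  # exactly 0
```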
Multi-model ensembles pose significant challenges, and the authors have chosen a fairly simplistic approach of a mixture model with equal model weights based on Weigel, et al., 2010. While the Weigel paper is an important contribution to the discussion on model weighting, much work has been done in the interim, including by a co-author on the Weigel paper. While a technical note is likely not the right place to delve into the many issues, it is remiss to downplay them particularly when the paper is trying to establish a probabilistic framework. There are many approaches to model weighting, including Bayesian model averaging and the more recent climate model weighting by independence and performance (ClimWIP), among many others, that address skill and model dependence. However, the issue with model weighting is not only about adjusting weights to account for skill or dependence between models due, for example, to shared components, but to whether the multi-model ensemble actually represents the range of uncertainty needed to justify a probabilistic statement (e.g., Chandler, 2013). At the very least, the authors should acknowledge the complexity of the issue and more recent work in the area, as well as the challenges that go beyond equal vs unequal weights and impact the very premise of their work.
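To make the equal-versus-unequal weighting point concrete: weights enter the mixture as multipliers on each model's distribution. The sketch below uses invented weights; schemes such as ClimWIP or Bayesian model averaging would derive them from model performance and independence. Equal weights recover the approach the paper takes.

```python
# Weighted mixture of per-model Gaussian hazard distributions. The weights
# here are invented; performance/independence schemes (e.g., ClimWIP)
# would supply them in practice.
import numpy as np
from scipy.stats import norm

mu = np.array([31.0, 34.5, 29.8])
sigma = np.array([3.0, 2.5, 3.4])
w = np.array([0.5, 0.3, 0.2])      # hypothetical weights, summing to 1

def mixture_sf(x, mu, sigma, w):
    """P(hazard > x) under a weighted Gaussian mixture."""
    x = np.atleast_1d(x)
    return np.sum(w[:, None] * norm.sf(x[None, :], mu[:, None], sigma[:, None]),
                  axis=0)
```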
The OEP formula near line 145 requires the assumption of independent events. There are many physical processes that exist on time scales longer than a year, and the authors have worked to include them through modeling of the mean and filtering. Can you assume years are independent? (This is somewhat different than using the OEP from catastrophe modeling point of view where different disasters are more naturally modeled as independent of each other.)
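The independence assumption can be made explicit with the standard identity: if annual exceedances are independent with probability p each year, the chance of at least one exceedance in n years is 1 - (1 - p)^n. Serial correlation, e.g., from decadal variability, would break this identity. Toy numbers:

```python
# Occurrence exceedance probability under the independence assumption:
# P(at least one exceedance in n years) = 1 - (1 - p)**n.
p = 0.01   # hypothetical annual exceedance probability (a "100-year" event)
n = 30     # planning horizon in years

oep = 1.0 - (1.0 - p) ** n
print(round(oep, 3))   # ~0.26; serially correlated years would change this
```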
Near line 165, the authors note that they remove inter-annual variability using a moving window filter. However, they already have the quadratic best-fit - is this not enough? Or is there other temporal variability they are trying to keep (e.g., decadal)? A bit more detail is necessary here, including the type of filter.
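For reference, one common choice is a centered moving average; the sketch below is generic (the manuscript does not specify its filter) and corrects for the shorter window at the series edges.

```python
# Generic centered moving-average filter of the kind asked about: it damps
# inter-annual variability while retaining slower (e.g., decadal) variations.
import numpy as np

def moving_average(y, window=11):
    """Centered moving average; edge points average over fewer years."""
    kernel = np.ones(window)
    num = np.convolve(y, kernel, mode="same")
    den = np.convolve(np.ones_like(y), kernel, mode="same")  # valid counts
    return num / den
```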
The authors use a “best-fit” (least-squares?) quadratic curve to remove the trend. For certain climate variables, the GEV is then fit via maximum likelihood to the residuals and then the location parameter is shifted by the value of the best-fit curve. The best-fit curve should represent the mean of the distribution. Given that the location parameter of the GEV is not the mean (the mean of the GEV is a function of all three parameters), this could have some unintended consequences, including distributions at points in time that are not valid due to the finite lower/upper bound on GEV depending on the value of the shape parameter. If it is appropriate to place a mean on the location parameter, this can easily be done directly through maximum likelihood (or in a Bayesian framework) and such an approach is easily accomplished through, for example, the extRemes package in R.
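The alternative suggested here — estimating a time-dependent location directly by maximum likelihood — can be sketched with scipy (the extRemes package in R offers this out of the box). The synthetic data, the quadratic form of the location, and the starting values below are illustrative assumptions.

```python
# Sketch of a nonstationary GEV fit: the location parameter is quadratic in
# time and estimated jointly with scale and shape by maximum likelihood,
# rather than shifting a stationary fit by a separately fitted trend.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.arange(100.0)                                  # years (synthetic)
y = genextreme.rvs(-0.1, loc=30.0 + 0.03 * t, scale=2.0, random_state=rng)

def nll(p, y, t):
    """Negative log-likelihood with location mu(t) = mu0 + mu1*t + mu2*t**2."""
    mu0, mu1, mu2, log_sigma, c = p
    loc = mu0 + mu1 * t + mu2 * t**2
    return -np.sum(genextreme.logpdf(y, c, loc=loc, scale=np.exp(log_sigma)))

# A stationary fit provides reasonable starting values.
c0, loc0, scale0 = genextreme.fit(y)
res = minimize(nll, x0=[loc0, 0.0, 0.0, np.log(scale0), c0],
               args=(y, t), method="Nelder-Mead")
mu0, mu1, mu2, log_sigma, c = res.x   # nonstationary GEV parameters
```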
Chandler, Richard E., 2013: Exploiting strength, discounting weakness: combining information from multiple climate simulators. Phil. Trans. R. Soc. A, 371, 20120388. http://doi.org/10.1098/rsta.2012.0388
Citation: https://doi.org/10.5194/egusphere-2022-9-RC2