the Creative Commons Attribution 4.0 License.
Global downscaled projections for climate impacts research (GDPCIR): preserving extremes for modeling future climate impacts
Abstract. Global climate models are important tools for understanding the climate system and how it is projected to evolve under scenario-driven emissions pathways. Their output is widely used in climate impacts research for modeling the current and future effects of climate change. However, climate model output remains coarse in relation to the high-resolution climate data needed for climate impacts studies, and it also exhibits biases relative to observational data. Treatment of the distribution tails is a key challenge in existing downscaled climate datasets available at a global scale; many of these datasets use quantile mapping techniques known to dampen or amplify trends in the tails. In this study, we apply the trend-preserving Quantile Delta Mapping (QDM) bias-adjustment method (Cannon et al., 2015) and develop a new downscaling method, the Quantile-Preserving Localized-Analog Downscaling (QPLAD) method, which also preserves trends in the distribution tails. Both methods are integrated into a transparent and reproducible software pipeline, which we apply to global, daily model output for surface variables (maximum and minimum temperature and total precipitation) from the Coupled Model Intercomparison Project Phase 6 (CMIP6) experiments (O’Neill et al., 2016) for the historical experiment and four future emissions scenarios ranging from aggressive mitigation to no mitigation: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5 (Riahi et al., 2017). We use European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 (Hersbach et al., 2018) temperature and precipitation reanalysis data as the reference dataset over the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6) reference period, 1995–2014. We produce bias-adjusted and downscaled data over the historical period (1950–2014) and for four emissions pathways (2015–2100) for 25 models in total.
The output dataset of this study is the Global Downscaled Projections for Climate Impacts Research (GDPCIR), a global, daily, 0.25° horizontal-resolution product which is publicly hosted on Microsoft AI for Earth’s Planetary Computer (https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir/).
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
CC1: 'Comment on egusphere-2022-1513', Damien Irving, 22 Jan 2023
One of the most simple methods for creating "application ready" climate projection data is to calculate an array of adjustment factors representing the change in each quantile between an historical (e.g. 1995-2014) and future (e.g. 2045-2065) GCM (i.e. relatively coarse spatial resolution) simulation. Using daily GCM data you might calculate quantile changes for each month, so you end up with a 100 by 12 array of adjustment factors. You can then directly apply those adjustments to observational data (i.e. over 1995-2014) that is at a much higher spatial resolution, in order to produce a new higher resolution climate projection dataset (i.e. for 2045-2065). For instance, if the first day (e.g. 1 January 1995) in your observational dataset at a particular grid point is 20 degrees Celsius and that corresponds to the 30th percentile of January temperatures in the observations, you simply apply the 30th percentile GCM adjustment factor (after regridding the adjustment factors to the observational spatial grid) for January to that 20 degree day (to get the temperature for 1 January 2045). This is basically the method used in the latest climate projections for Australia: https://www.climatechangeinaustralia.gov.au/en/obtain-data/application-ready-data/scaling-methods/
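For concreteness, the monthly quantile-delta scheme described above could be sketched as follows. This is an illustrative numpy sketch for a single month and grid point only; the function and variable names are hypothetical, not taken from the Australian product, and the additive form shown applies to temperature.

```python
import numpy as np

def quantile_change_factors(hist_gcm, fut_gcm, n_quantiles=100):
    """Additive change in each quantile between a historical and a
    future GCM simulation (e.g. daily temperatures for one month)."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(fut_gcm, q) - np.quantile(hist_gcm, q)

def apply_change_factors(obs, hist_gcm, fut_gcm, n_quantiles=100):
    """Shift each observed day by the GCM change at that day's own
    quantile within the observational record."""
    factors = quantile_change_factors(hist_gcm, fut_gcm, n_quantiles)
    # empirical quantile (rank) of each observed value
    ranks = np.searchsorted(np.sort(obs), obs, side="right") / len(obs)
    idx = np.clip(np.round(ranks * (n_quantiles - 1)).astype(int),
                  0, n_quantiles - 1)
    return obs + factors[idx]
```

For precipitation one would normally use ratios rather than differences, and in the Australian workflow the factors would first be regridded to the observational grid before being applied.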
I'd be interested to know if the authors think the approach I've described above would produce similar results to QDM-QPLAD, as it's not an approach canvassed in their introduction. QDM-QPLAD is significantly more complicated, so it would be interesting to hear the benefits of that added complexity.
Citation: https://doi.org/10.5194/egusphere-2022-1513-CC1 -
AC2: 'Reply on CC1', Diana R. Gergel, 19 Jun 2023
Thank you for this question, and for providing the link to the climate projections used in Australia.
It is difficult to speculate about how similar or different the results from QDM-QPLAD would be to the approach outlined for the Australian climate projections, given the many nonlinearities of each method. However, some first-order differences in method suggest that, at least at finer resolution and for higher moments of the time series distribution, the results could be significantly different. At the bias adjustment step, the quantile-quantile delta method the commenter describes assumes that the observations provide the daily weather variability, with its distribution corrected to the projected monthly GCM distribution. In contrast, the QDM method we use assumes daily weather variability comes from the GCM projection, but its distribution is corrected to the historical daily observations (plus the change in quantiles from the GCM).
The choice of which time series provides the random weather variability (observations or GCM projections) and which distribution is the “baseline” or “truth” to correct to (observations or GCM) is somewhat subjective and dependent on use-case. We have chosen to optimize for maintaining the trends in the distributions, and therefore extremes, from the GCMs while making the starting (or baseline) distribution consistent with the observational distribution.
This optimization also guided our approach to downscaling. The QPLAD method was developed to maintain trend preservation at the downscaling step as well. Therefore, we take a “local analogue” approach that chooses the actual day at a coarse-resolution quantile in observations and computes adjustment factors between it and the same day at fine resolution; this guarantees that the resulting bias-adjusted and downscaled GCM still maintains quantile trends from the GCM projection. We think this approach is, in principle, similar to regridding change factors as in the Australian method; however, in practice, given the additional method and implementation differences, it is difficult to determine just how sensitive the results are to these two different downscaling approaches.
Finally, we apply the QDM-QPLAD method to each day in each year on a rolling basis to ensure no discontinuities across month and year boundaries in the results.
While these features introduce additional complexity in the QDM-QPLAD method -- e.g. daily bias adjustment and downscaling, adjusting each year using a rolling 20-year adjustment window, applying the QPLAD downscaling approach -- we believe the additional complexity has the benefit of allowing the user to estimate impacts on continuous time series of daily climate information with spatial resolution that has observationally-consistent spatial patterns, preserved trends in extremes, and projections that are consistent with a historical, observed baseline.
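As a rough illustration of the QDM logic described in this reply, the following is a minimal additive sketch for one grid cell. It omits the rolling 20-year windows, the day-of-year handling, and the multiplicative form used for precipitation, and the names are ours rather than from the paper's pipeline.

```python
import numpy as np

def qdm_additive(obs_ref, gcm_ref, gcm_fut):
    """Quantile Delta Mapping (Cannon et al., 2015), additive form.
    Each future GCM value keeps its own quantile's trend relative to the
    model's historical distribution, while the baseline distribution is
    replaced by the observations."""
    # empirical quantile of each future value within the future simulation
    tau = np.searchsorted(np.sort(gcm_fut), gcm_fut, side="right") / len(gcm_fut)
    q = np.clip(tau, 0.0, 1.0)
    # trend at that quantile, relative to the model's historical distribution
    delta = gcm_fut - np.quantile(gcm_ref, q)
    # map onto the observed distribution and add the preserved trend
    return np.quantile(obs_ref, q) + delta
```

Because the trend term `delta` is carried over unchanged at every quantile, the change signal in the tails is preserved by construction, which is the property discussed above.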
Citation: https://doi.org/10.5194/egusphere-2022-1513-AC2
-
RC1: 'Comment on egusphere-2022-1513', Anonymous Referee #1, 27 Feb 2023
Comments on "Global downscaled projections for climate impacts research (GDPCIR): preserving extremes for modeling future climate impacts," by Gergel, Malevich, McCusker, Tenezakis, Delgado, Fish, and Kopp, egusphere-2022-1513.
This manuscript obviously represents a very large amount of important work and is quite commendable for the usefulness of the data produced. Overall I am very positive about the project. I do have some major comments that need to be addressed before publication but that does not diminish the overall quality of the work in my mind.
Major comments
1. The manuscript emphasizes preserving extremes -- it's even right there in the title -- but evaluation of the extremes is given only a weak treatment in the manuscript. A better and more complete job of describing the extremes is necessary if, as we see here, the extremes are declared by the authors to be a key component of the project.
For example, Figures 3 and 4 only show 95th percentile values. That is about 3-4 days per year. The hottest 3-4 days in a year, the wettest 3-4 days, etc. The economic and societal importance of climate extremes is much more apparent at extremes that are less frequent than several times per year. For example, water management and flooding analyses routinely consider 1-in-100 *year* floods or precipitation events. The Pacific Northwest heat wave of 2021 has been estimated at a 1-in-multi-millennium event. Values that are routinely seen several times per year are not near the level that causes the big societal and economic impacts that this manuscript asserts it is concerned with.
I request that the authors add to the main text or supplementary figures a comparison of how well their method preserves GCM-predicted trends at more extreme levels. For example, 1-in-10 year, 1-in-20 year, and/or 1-in-50 year extremes would be appropriate, especially for precipitation. Besides the trends, it would be useful to see what the actual values look like. The text describes an issue with some extreme values becoming unrealistic and steps taken to mitigate this. Illustrations and evaluations of these more important and impactful (than 95th ptile) extreme values are needed.
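For reference, the 1-in-N-year return levels the referee asks about can be estimated empirically from a series of annual maxima, as in this illustrative sketch. For return periods approaching or exceeding the record length, an extreme-value fit (e.g. a GEV distribution) would normally be used instead of the empirical quantile shown here.

```python
import numpy as np

def empirical_return_level(annual_maxima, return_period_years):
    """Empirical 1-in-N-year level: the (1 - 1/N) quantile of the
    annual-maximum series (valid only for N well below the record length)."""
    return np.quantile(annual_maxima, 1.0 - 1.0 / return_period_years)
```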
2. There is a rich literature on some of the issues addressed here, such as how to bias correct in a way that preserves the original model trends, that is not included in the current manuscript. These works should be appropriately cited since they are concerned with an important part of the submitted manuscript. The way it is now, it gives too much of an impression that the described work exists in a vacuum, which it does not.
For example, Michelangeli et al. 2009 (GRL vol 36 L11708, doi:10.1029/2009GL038401) addressed the issue of how to bias correct a model subject to climate change using the cumulative distribution function transformation (CDF-t) method; H. Li et al. 2010 (JGR vol 115, D10101, doi:10.1029/2009JD012882) described the fundamentals of equidistant quantile mapping some years before the Cannon reference you cite; Pierce et al. 2015 (J. Hydromet v. 16 p. 2421, doi:10.1175/JHM-D-14-0236.1) implemented the quantile trend-preserving bias correction for a large data set as well as comparing it to standard quantile mapping, etc. I'm sure you can find other examples, but overall I think the manuscript as written gives short shrift to the context in which this work was done.
Minor comments
* Line 98: Please list ensemble members for each model. Not all models have an "r1i1p1f1". Was only one ensemble member per model downscaled? Please state this explicitly.
* Line 323, Figure 2 caption: In the caption please explicitly state which variable is shown.
* Line 435: For precipitation, is this 95th ptile of all days, or only wet days? Please specify. As noted above, 95th percentile of all days, not even wet days, is not extreme enough to cause substantial societal or economic impacts.
* Line 448: The panels in Figure 4 are too small to be useful. Please redraft so that the panels are much larger.
* Line 448: It's hard to interpret Figure 4 because some of the areas of concern are in dry areas where the colorbar is just showing a dark blue that is hard to differentiate. It would be useful to add some figures to the supplementary information to address this, for example, an additional set of figures where only regions with p < 10 mm are shown (with their own colorbar). The question I want to answer is whether I should be concerned about the large misses in some locations as illustrated here, or is this confined to regions where there is only very little precipitation anyway. Between the tiny panel size, a colorbar where all values below 12 mm look the same, and only supplying JJA rather than the other seasons as well (which need to be added), I can't answer this important question. Results given for only one model, one scenario, and one season do not inspire general confidence in a large data set.
* Line 573, Figure 8: Why do some of the Y axes say "False"?
Citation: https://doi.org/10.5194/egusphere-2022-1513-RC1 -
AC3: 'Reply on RC1', Diana R. Gergel, 21 Jul 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2022-1513/egusphere-2022-1513-AC3-supplement.pdf
-
CC2: 'Comment on egusphere-2022-1513', Naomi Goldenson, 02 Mar 2023
This paper documents a valuable contribution to statistical downscaling of climate projections by building on trend-preserving bias-correction with an analog-based approach to fine-scale patterns. The novelty is in applying techniques that deserve to be more widely adopted in a well-documented, reproducible whole. One hopes that it will make it easier for users of statistical downscaling to compare across approaches, and apply approaches such as this one on their own.
My main question is whether anything is done about the coherence across spatial patterns at the boundaries of the larger grid-cells. Is some sort of spatial smoothing done if the fine-scale analogs in adjacent larger grid-cells on a given day have unphysical discontinuities at the boundaries? I didn't notice any mention of this, and a sentence or two about it would be helpful.
Citation: https://doi.org/10.5194/egusphere-2022-1513-CC2 -
AC1: 'Reply on CC2', Diana R. Gergel, 19 Jun 2023
Thanks for your positive comment and this question regarding coherence of spatial patterns. We did not do any spatial smoothing of our results, in part because it would modify the trend preservation and in part because it would potentially dampen (i.e., smooth out) extremes. We have added a sentence to Section 4.3.1 stating that we did not perform any spatial smoothing.
Citation: https://doi.org/10.5194/egusphere-2022-1513-AC1
-
RC2: 'Comment on egusphere-2022-1513', Anonymous Referee #2, 22 Mar 2023
Comments to “Global downscaled projections for climate impacts research (GDPCIR): preserving extremes for modeling future climate impacts”, by Gergel et al.
This work presents a comprehensive assessment of a new global dataset based on bias adjustment and downscaling of CMIP6, which could be of great interest for the impacts community. The paper is well-written and methodological details are meticulously explained. Still, I recommend addressing some points before the manuscript is considered for publication.
Overall comments
ERA5 data are used as the reference to bias-correct climate model output. Since the focus of the dataset is the impacts community and ERA5 is a reanalysis dataset, the appropriateness of ERA5 for impact studies should be discussed. Higher resolution than other observation-based products is an advantage, but was it evaluated against real observations (by the authors or in other works which could be cited)? Also note that bias adjustment methods which preserve trends for all quantiles (such as QDM) rely to a greater extent on the reference dataset used for calibration, thus presenting a larger sensitivity to the observations used for calibration (especially for precipitation), as shown by Casanueva et al. 2020. This issue of uncertainties due to the reference dataset should also be discussed.
About the bias correction and downscaling methodology, preservation of the raw model signals is not always the preferred approach (e.g. if the model does not represent basic processes correctly or has biases in large-scale processes, its local-scale trends should not be trusted either; see Fig. 5 in Maraun et al. 2017). In this sense, some recent works show that bias adjustment can lead to more realistic climate change signals (e.g. by bringing the GCM closer to more realistic regional climate models, Casanueva et al. 2019) and to more plausible threshold-based indices, e.g. in insular regions (Iturbide et al. 2022). Also, although QDM preserves trends in all quantiles, trends are not preserved for some moderate/extreme indices based on absolute threshold exceedances. In this sense, the use of simple and parsimonious bias adjustment methods is supported (e.g. Iturbide et al. 2022). Did the authors try other simpler bias adjustment methods together with the downscaling methodology? It is recommended to include some discussion along these points.
Regarding the potential use of these data by the impacts community, many impact studies require other variables beyond temperature and precipitation, which other initiatives such as ISIMIP provide (Lange 2019). This limitation should be mentioned.
Regarding the GDPCIR dataset, it is written that it “is publicly hosted on the Microsoft Planetary Computer”. Does this mean that it is publicly available? Does one need to have an account and/or pay for this service? If that is the case, its use by the impacts community is not very straightforward. Why use this system instead of other completely free services more aligned with the research community? Also, are the files following CF standards for metadata? All this is relevant and should be explained.
Regarding the pipeline, some indication of the running time and resources would be nice to have.
Specific comments
L9 Here it seems that these two methods are used as independent ones, but later I found that QDM is used for BA and QPLAD for downscaling, one after the other; thus, they are used together. Please clarify this in the abstract.
L34-37 Downscaling and adjustment are a bit mixed in these lines. BA is a mere correction and can be applied even with no resolution mismatch between model and observations. Statistical downscaling also comprises other statistical approaches which actually transfer large-scale information to the local scale (see Maraun and Widmann 2018, Gutiérrez et al. 2019), some of which are quite sophisticated (Baño-Medina et al. 2021). BA includes an implicit downscaling step if the resolution of the reference data is higher than that of the model, but it is not purely a statistical downscaling method.
L38, 51, 156, 158, 165, 170, 220, 221. Better to use “model” than “GCM” in these descriptive lines since BA can be applied to any (global or regional) climate models.
L41, 150 What do you mean by “standard”? The variety of QM methods is huge; it would be better to state whether it is empirical or parametric, or to add a reference to guide the reader a bit more.
L53, 54 In Lange (2019) the final resolution of 0.5º is due to the reference data used, not to the BA-SD methodology itself (as the line seems to suggest). I mean, too coarsely resolved BA projections are due to the limited availability of high-resolution global datasets (before ERA5, 0.5º was the best one could get at a global scale), regardless of the BA or SD applied. It is true that this is a limitation for the use of such projections in impact studies, but the sentence could be better phrased. Note also that 1) CMIP6 models corrected with the BA-SD of Lange 2019 are used in a large intercomparison project devoted to impacts research (ISIMIP), and 2) a large resolution mismatch between model and observations is also problematic (Maraun 2013); thus, in general, a high-resolution reference dataset is not the solution as long as model data are still too coarse. Also, in line 54, “this effect … dampens or amplifies …”: I do not see how coarse resolution dampens or amplifies trends in the tails; the authors may be referring to the BA or SD methodology, not resolution per se.
L59 “Several CMIP6 downscaling datasets”, I think the authors mean “Several CMIP6 downscaled datasets”.
L62 “has made a CMIP6 dataset available”, I think the authors mean “has made a bias-adjusted CMIP6 dataset available”, or similar, because CMIP6 is available from other public sources (ESGF).
L65 It is worth highlighting that ISIMIP provides a large number of variables, not limited to temperature and precipitation, since many impact studies require them, and this is an important advantage.
L69 “and no longer widely used”, I think “is” is missing.
L79 Where? Make reference to the appropriate section.
L97 I think brackets are missing for the reference.
L99 How many ESGF models were missed? Does the selected subset retain the ensemble spread of all of CMIP6?
L119 Why were the 5 extra days in 360-day calendars filled with the average of adjacent days instead of leaving them as NaN? Averaging could be fine for temperature, but what about precipitation? I checked Pierce et al. 2014 and did not find this filling procedure.
Table 1. I suggest writing the information on the SSPs in an easier-to-read way, with four columns and X denoting the SSPs for each model (in rows). Please also add the simulation run.
L178 Please clarify what is meant with “Traditional downscaling methods”. As said before, typical statistical downscaling builds empirical relationships between large-scale variables (predictors) and local scale predictands (e.g. by means of a linear model), which is not a difference or ratio.
L187 Which one is the coarse resolution? This information needs to be included in this section; it is currently first found in line 222. I was wondering which method was used for interpolation and found this information in Sect. 4.1. I would suggest referring here to Sect. 4.1 for further details.
L189 Please mention here QDM explicitly, maybe in brackets, to guide the reader.
L222 “adjustment to both”: I think the authors mean that the GCM wet-day frequency was adjusted to the observed counterpart; otherwise, both are adjusted to what? Also, although less common, it is also a problem when the models are much drier than observations. For this, Themeßl et al. 2012 introduced the frequency adaptation. Is there any way you account for such dry biases?
L252 Why did the authors not use conservative regridding for temperatures as well?
L257-258 “using the regridding method described above” and “using the same regridding methods as in the GCM output” seem to be something different but they refer to the same, right? Please rephrase.
L267 “100 equally spaced quantiles” Do the authors work with 100 percentiles, then? In line 190 it was said that the number of quantiles is equal to the number of timesteps (20x31), as one would expect as Cannon’s QDM works with all quantiles (default option), if I am right. Please clarify.
L292 Shouldn’t be “percentiles” instead of “quantiles”?
L294 Please explain how the method deals with new extremes, i.e. quantiles of the future period which were not reached in the reference dataset (see different extrapolations in Themeßl et al. 2012).
L315-316 Is the description of the 2b panel right? It seems to represent the adjustment factor per day of the year and quantile (for the 0.25º gridbox over Miami), I do not see how spatial analogs are shown. The term “spatial analogs” is quite confusing, since for Miami the downscaled value is obtained through the adjustment function calculated between the 1º gridbox and the 0.25º gridbox over Miami, right? As far as I understood the other nearby 0.25º gridboxes do not affect downscaling for Miami, thus “spatial analogs” should be better phrased or clarified. Also how is the analog for each quantile selected from the 620 possible analogs? The mean? Randomly?
L338 I guess these unrealistic values come from the adjustment factors by QDM (please add this, e.g. in brackets), since adjustment factors are also applied in the downscaling step.
L342 “that this” I think one should be removed, otherwise I do not understand the sentence.
L414-415 Not sure what this sentence means. Of course results should be consistent with the described methodology. Moreover, is the reference to Fig.2 right?
L418 I was wondering here which data were used in Fig.3b, bias-corrected or downscaled? From this comment about more extremes in higher resolution, it seemed to be downscaled, but then I saw the reference to Fig.A2. Since this confusion comes from time to time, please try to be very clear, e.g. add “after QDM” if you are discussing both but want to refer to bias-corrected only. Also “biascorrected -model” in the figure title should rather be “biascorrected – raw model”.
L427 Was this behaviour also present in other GCMs? Do the same conclusions about accentuating the Arctic amplification hold for other GCMs? Are changes similar? How robust is this result? I would suggest showing this result (Figs. 3 and 4) for the multi-model ensemble median/mean in the supplementary material.
L439 “and” is missing before the second ratio.
Fig.4 caption and titles. Please refer to “raw model” instead of “model”.
L472 In fact, summer days, tropical nights, and annual wet days depend on thresholds. Why is the opposite mentioned? It is precisely in these indices where one can find a fair evaluation of QDM, since it preserves trends in quantiles.
L481 Null hypothesis should be rejected if p-value<0.05 and not rejected if p-value >0.05, thus it should be >. Do you use K-S for distributions of annual indices, thus 10, 15, 20 years only? Aren’t they too few data to fit distributions?
Table 2. I do not see the rationale behind the order of the indices within the table. They could go from mean to moderate and extreme, or be ordered by input variable. What is the interest of days above 90ºF? To my knowledge that is not an ETCCDI index. Also, please use (or add in a new column) the ETCCDI nomenclature in the table, e.g. tropical nights for “tn_days_above”. Otherwise, what is the index column representing?
Sect. 5.2.1 Why show results for Miami? GCMs do not represent coastlines correctly, especially those with coarse resolution. Is it a land gridbox at all models’ original resolution? Do you apply any land-sea mask?
L511 Is here bias adjustment referring also to downscaling?
L517 Please mention that multiplicative factors are used for precipitation.
L520 The error in Eq. 7 is calculated as the difference (in absolute value) between the climate change signal in bias-adjusted data and raw data. Climate change signals are usually calculated over 20-yr periods or more, so I do not see how the error is calculated on an annual basis. Furthermore, the error in Eq. 6 between the bias-adjusted model and the reference should not be calculated annually, because climate model simulations do not have a day-to-day nor year-to-year correspondence with observations; thus the error should be calculated using 20-year periods.
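One reading of this comment, as a sketch: both error measures should be computed from multi-year period means rather than year by year. The function names below are hypothetical, and the paper's Eqs. 6-7 may differ in detail.

```python
import numpy as np

def period_mean_error(ba_hist, ref_hist):
    """Eq. 6-style error: |bias-adjusted model - reference|, each side
    reduced to a multi-year (e.g. 20-year) period mean, not annual values."""
    return abs(np.mean(ba_hist) - np.mean(ref_hist))

def signal_modification(raw_hist, raw_fut, ba_hist, ba_fut):
    """Eq. 7-style error: |change signal after bias adjustment -
    change signal in the raw model|, each signal from period means."""
    raw_signal = np.mean(raw_fut) - np.mean(raw_hist)
    ba_signal = np.mean(ba_fut) - np.mean(ba_hist)
    return abs(ba_signal - raw_signal)
```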
Fig.6 It would be convenient to display the first panel with an aspect ratio of 1:1. Also, it would help the interpretation of the results to have information about the error (Eq. 6) or bias in the raw simulation and in the bias-adjusted data (only QDM). One idea could be to add another row of panels with scatter plots for these quantities. About these results, it is a bit of an issue that for temperatures the error in present climate of the bias-adjusted data (X axis) is larger than the modification of the change signal (Y axis). Is the modification then justified? This should be at least discussed.
L546 Please mention that these regions are considered for the cities in the previous section. This information was first found in Fig. 7 caption.
L558 What is meant by Gaussian interpolation?
Figure 8. Correct “False” in Y-axis.
Figure A1: is it first mentioned in the conclusions? Then why is A2 not the first figure?
References
Baño-Medina, J., Manzanas, R. & Gutiérrez, J.M. On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections. Clim Dyn 57, 2941–2951 (2021). https://doi.org/10.1007/s00382-021-05847-0
Casanueva, A., Kotlarski, S., Herrera, S., Fischer, A. M., Kjellstrom, T., and Schwierz, C.: Climate projections of a multivariate heat stress index: the role of downscaling and bias correction, Geosci. Model Dev., 12, 3419–3438, https://doi.org/10.5194/gmd-12-3419-2019, 2019.
Casanueva, A, Herrera, S, Iturbide, M, et al. Testing bias adjustment methods for regional climate change applications under observational uncertainty and resolution mismatch. Atmos Sci Lett. 2020; 21: 21:e978. https://doi.org/10.1002/asl.978
Gutiérrez, JM, Maraun, D, Widmann, M, et al. An intercomparison of a large ensemble of statistical downscaling methods over Europe: Results from the VALUE perfect predictor cross-validation experiment. Int. J. Climatol. 2019; 39: 3750– 3785. https://doi.org/10.1002/joc.5462
Iturbide, M., Casanueva, A., Bedia, J., Herrera, S., Milovac, J., & Gutiérrez, J. M. (2022). On the need of bias adjustment for more plausible climate change projections of extreme heat. Atmospheric Science Letters, 23( 2), e1072. https://doi.org/10.1002/asl.1072
Lange, S.: Trend-preserving bias adjustment and statistical downscaling with ISIMIP3BASD (v1.0), Geoscientific Model Development, 12, 3055–3070, https://doi.org/10.5194/gmd-12-3055-2019, publisher: Copernicus GmbH, 2019.
Maraun, D., 2013: Bias Correction, Quantile Mapping, and Downscaling: Revisiting the Inflation Issue. J. Climate, 26, 2137–2143, https://doi.org/10.1175/JCLI-D-12-00821.1.
Maraun, D. and Widmann, M.: Statistical Downscaling and Bias Correction for Climate Research, Cambridge University Press, Cambridge, https://doi.org/10.1017/9781107588783, 2018.
Maraun, D., Shepherd, T., Widmann, M. et al. Towards process-informed bias correction of climate change simulations. Nature Clim Change 7, 764–773 (2017). https://doi.org/10.1038/nclimate3418
Themeßl, M.J., Gobiet, A. & Heinrich, G. Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change 112, 449–468 (2012). https://doi.org/10.1007/s10584-011-0224-4
Citation: https://doi.org/10.5194/egusphere-2022-1513-RC2 -
AC4: 'Reply on RC2', Diana R. Gergel, 21 Jul 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2022-1513/egusphere-2022-1513-AC4-supplement.pdf
-
AC4: 'Reply on RC2', Diana R. Gergel, 21 Jul 2023
Interactive discussion
Status: closed
-
CC1: 'Comment on egusphere-2022-1513', Damien Irving, 22 Jan 2023
One of the most simple methods for creating "application ready" climate projection data is to calculate an array of adjustment factors representing the change in each quantile between an historical (e.g. 1995-2014) and future (e.g. 2045-2065) GCM (i.e. relatively coarse spatial resolution) simulation. Using daily GCM data you might calculate quantile changes for each month, so you end up with a 100 by 12 array of adjustment factors. You can then directly apply those adjustments to observational data (i.e. over 1995-2014) that is at a much higher spatial resolution, in order to produce a new higher resolution climate projection dataset (i.e. for 2045-2065). For instance, if the first day (e.g. 1 January 1995) in your observational dataset at a particular grid point is 20 degrees Celsius and that corresponds to the 30th percentile of January temperatures in the observations, you simply apply the 30th percentile GCM adjustment factor (after regridding the adjustment factors to the observational spatial grid) for January to that 20 degree day (to get the temperature for 1 January 2045). This is basically the method used in the latest climate projections for Australia: https://www.climatechangeinaustralia.gov.au/en/obtain-data/application-ready-data/scaling-methods/
I'd be interested to know whether the authors think the approach I've described above would produce similar results to QDM-QPLAD, as it's not an approach canvassed in their introduction. QDM-QPLAD is significantly more complicated, so it would be interesting to hear the benefits of that added complexity.
Citation: https://doi.org/10.5194/egusphere-2022-1513-CC1 -
AC2: 'Reply on CC1', Diana R. Gergel, 19 Jun 2023
Thank you for this question, and for providing the link to the climate projections used in Australia.
It is difficult to speculate about how similar or different results from QDM-QPLAD would be from the approach outlined for the Australian climate projections, given the many nonlinearities of each method. However, some first-order differences in method suggest that, at least at finer resolutions and for higher moments of the time series distribution, the results could differ significantly. At the bias adjustment step, the quantile-quantile delta method the commenter describes assumes that the observations provide the daily weather variability, with its distribution corrected to the projected monthly GCM distribution. In contrast, the QDM method we use assumes daily weather variability comes from the GCM projection, but its distribution is corrected to the historical daily observations (plus the change in quantiles from the GCM).
The choice of which time series provides the random weather variability (observations or GCM projections) and which distribution is the “baseline” or “truth” to correct to (observations or GCM) is somewhat subjective and dependent on use-case. We have chosen to optimize for maintaining the trends in the distributions, and therefore extremes, from the GCMs while making the starting (or baseline) distribution consistent with the observational distribution.
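The distinction above can be made concrete with a minimal additive-QDM sketch (Cannon et al., 2015). The data and names are synthetic and hypothetical; this is not the GDPCIR implementation, just the core quantile-delta step for one cell and one variable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily series for one grid cell (deg C).
obs = rng.normal(11.0, 4.0, size=620)        # reference observations
gcm_hist = rng.normal(10.0, 5.0, size=620)   # model, historical period
gcm_fut = rng.normal(13.0, 6.0, size=620)    # model, future period

# Additive QDM, per future model day:
# 1. tau: that day's quantile within the future model distribution,
# 2. delta: the model's change at that quantile vs. the historical period,
# 3. result: the observed value at tau, shifted by delta.
tau = (np.searchsorted(np.sort(gcm_fut), gcm_fut) + 0.5) / gcm_fut.size
delta = gcm_fut - np.quantile(gcm_hist, tau)
adjusted = np.quantile(obs, tau) + delta
```

Here the day-to-day sequence of `adjusted` follows the GCM projection (via `tau`), while its distribution matches the observations plus the GCM's quantile changes, which is the trade-off described above.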
This optimization also guided our approach to downscaling. The QPLAD method was developed to maintain trend preservation at the downscaling step as well. Therefore, we take a "local analog" approach that chooses the actual day at a coarse-resolution quantile in observations and computes adjustment factors between it and the same day at fine resolution; this guarantees that the resulting bias-adjusted and downscaled GCM output still maintains quantile trends from the GCM projection. We think this approach is, in principle, similar to regridding change factors as in the Australian method; however, in practice, with the additional method and implementation differences, it is difficult to determine just how sensitive the results are to these two different downscaling approaches.
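A minimal sketch of the local-analog idea for a single fine-resolution cell, with hypothetical names and synthetic coarse/fine observations (not the QPLAD code itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Coarse-cell observations and co-located fine-cell observations for the
# same 620 days (synthetic; the fine cell runs warmer in this example).
obs_coarse = rng.normal(11.0, 4.0, size=620)
obs_fine = obs_coarse + rng.normal(0.5, 1.0, size=620)

def downscale_local_analog(value_coarse, tau):
    """Downscale one bias-adjusted coarse value at quantile tau by borrowing
    the coarse-to-fine offset from the observed analog day at that quantile."""
    analog = np.argsort(obs_coarse)[int(tau * (obs_coarse.size - 1))]
    return value_coarse + (obs_fine[analog] - obs_coarse[analog])

fine_value = downscale_local_analog(14.0, 0.95)  # a warm bias-adjusted day
```

Because the adjustment factor is an offset taken from an actual observed day, the downscaled series inherits the observed fine-scale spatial pattern while leaving the coarse quantile trend from the GCM untouched.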
Finally, we apply the QDM-QPLAD method to each day in each year on a rolling basis to ensure no discontinuities across month and year boundaries in the results.
While these features (e.g. daily bias adjustment and downscaling, adjusting each year using a rolling 20-year adjustment window, applying the QPLAD downscaling approach) introduce additional complexity in the QDM-QPLAD method, we believe this complexity has the benefit of allowing the user to estimate impacts on continuous time series of daily climate information with observationally consistent spatial patterns, preserved trends in extremes, and projections that are consistent with a historical, observed baseline.
Citation: https://doi.org/10.5194/egusphere-2022-1513-AC2
-
RC1: 'Comment on egusphere-2022-1513', Anonymous Referee #1, 27 Feb 2023
Comments on "Global downscaled projections for climate impacts research (GDPCIR): preserving extremes for modeling future climate impacts," by Gergel, Malevich, McCusker, Tenezakis, Delgado, Fish, and Kopp, egusphere-2022-1513.
This manuscript obviously represents a very large amount of important work and is quite commendable for the usefulness of the data produced. Overall I am very positive about the project. I do have some major comments that need to be addressed before publication but that does not diminish the overall quality of the work in my mind.
Major comments
1. The manuscript emphasizes preserving extremes -- it's even right there in the title -- but evaluation of the extremes is given only a weak treatment in the manuscript. A better and more complete job of describing the extremes is necessary if, as we see here, the extremes are declared by the authors to be a key component of the project.
For example, Figures 3 and 4 only show 95th percentile values. That is about 3-4 days per year: the hottest 3-4 days in a year, the wettest 3-4 days, etc. The economic and societal importance of climate extremes is much more apparent at extremes that are less frequent than several times per year. For example, water management and flooding analyses routinely consider 1-in-100 *year* floods or precipitation events. The Pacific Northwest heat wave of 2021 has been estimated at a 1-in-multi-millennium event. Values that are routinely seen several times per year are not near the level that causes the big societal and economic impacts that this manuscript asserts it is concerned with.
I request that the authors add to the main text or supplementary figures a comparison of how well their method preserves GCM-predicted trends at more extreme levels. For example, 1-in-10 year, 1-in-20 year, and/or 1-in-50 year extremes would be appropriate, especially for precipitation. Besides the trends, it would be useful to see what the actual values look like. The text describes an issue with some extreme values becoming unrealistic and steps taken to mitigate this. Illustrations and evaluations of these more important and impactful (than 95th ptile) extreme values are needed.
2. There is a rich literature on some of the issues addressed here, such as how to bias correct in a way that preserves the original model trends, that is not included in the current manuscript. These works should be appropriately cited since they are concerned with an important part of the submitted manuscript. The way it is now, it gives too much of an impression that the described work exists in a vacuum, which it does not.
For example, Michelangeli et al. 2009 (GRL vol 36 L11708, doi:10.1029/2009GL038401) addressed the issue of how to bias correct a model subject to climate change using the cumulative distribution function transformation (CDF-t) method; H. Li et al. 2010 (JGR vol 115, D10101, doi:10.1029/2009JD012882) described the fundamentals of equidistant quantile mapping some years before the Cannon reference you cite; Pierce et al. 2015 (J. Hydromet v. 16 p. 2421, doi:10.1175/JHM-D-14-0236.1) implemented the quantile trend-preserving bias correction for a large data set as well as comparing it to standard quantile mapping, etc. I'm sure you can find other examples, but overall I think the manuscript as written gives short shrift to the context in which this work was done.
Minor comments
* Line 98: Please list ensemble members for each model. Not all models have an "r1i1p1f1". Was only one ensemble member per model downscaled? Please state this explicitly.
* Line 323, Figure 2 caption: In the caption please explicitly state which variable is shown.
* Line 435: For precipitation, is this 95th ptile of all days, or only wet days? Please specify. As noted above, 95th percentile of all days, not even wet days, is not extreme enough to cause substantial societal or economic impacts.
* Line 448: The panels in Figure 4 are too small to be useful. Please redraft so that the panels are much larger.
* Line 448: It's hard to interpret Figure 4 because some of the areas of concern are in dry areas where the colorbar is just showing a dark blue that is hard to differentiate. It would be useful to add some figures to the supplementary information to address this, for example, an additional set of figures where only regions with p < 10 mm are shown (with their own colorbar). The question I want to answer is whether I should be concerned about the large misses in some locations as illustrated here, or is this confined to regions where there is only very little precipitation anyway. Between the tiny panel size, a colorbar where all values below 12 mm look the same, and only supplying JJA rather than the other seasons as well (which need to be added), I can't answer this important question. Results given for only one model, one scenario, and one season do not inspire general confidence in a large data set.
* Line 573, Figure 8: Why do some of the Y axes say "False"?
Citation: https://doi.org/10.5194/egusphere-2022-1513-RC1 -
AC3: 'Reply on RC1', Diana R. Gergel, 21 Jul 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2022-1513/egusphere-2022-1513-AC3-supplement.pdf
-
CC2: 'Comment on egusphere-2022-1513', Naomi Goldenson, 02 Mar 2023
This paper documents a valuable contribution to statistical downscaling of climate projections by building on trend-preserving bias-correction with an analog-based approach to fine-scale patterns. The novelty is in applying techniques that deserve to be more widely adopted in a well-documented, reproducible whole. One hopes that it will make it easier for users of statistical downscaling to compare across approaches, and apply approaches such as this one on their own.
My main question is whether anything is done about the coherence across spatial patterns at the boundaries of the larger grid-cells. Is some sort of spatial smoothing done if the fine-scale analogs in adjacent larger grid-cells on a given day have unphysical discontinuities at the boundaries? I didn't notice any mention of this, and a sentence or two about it would be helpful.
Citation: https://doi.org/10.5194/egusphere-2022-1513-CC2 -
AC1: 'Reply on CC2', Diana R. Gergel, 19 Jun 2023
Thanks for your positive comment and this question regarding coherence of spatial patterns. We did not do any spatial smoothing of our results, in part because it would modify the trend preservation and in part because it would potentially dampen (i.e., smooth out) extremes. We have added a sentence to Section 4.3.1 stating that we did not perform any spatial smoothing.
Citation: https://doi.org/10.5194/egusphere-2022-1513-AC1
-
RC2: 'Comment on egusphere-2022-1513', Anonymous Referee #2, 22 Mar 2023
Comments to “Global downscaled projections for climate impacts research (GDPCIR): preserving extremes for modeling future climate impacts”, by Gergel et al.
This work presents a comprehensive assessment of a new global dataset based on bias adjustment and downscaling of CMIP6, which could be of great interest to the impacts community. The paper is well written and methodological details are meticulously explained. Still, I recommend addressing some points before the manuscript is considered for publication.
Overall comments
ERA5 data are used as the reference to bias-correct climate model output. Since the focus of the dataset is the impacts community and ERA5 is a reanalysis dataset, the appropriateness of ERA5 for impact studies should be discussed. Higher resolution than other observation-based products is an advantage, but has it been evaluated against real observations (by the authors or in other works which could be cited)? Also note that bias adjustment methods which preserve trends for all quantiles (such as QDM) rely to a greater extent on the reference dataset used for calibration, thus presenting a larger sensitivity to the observations used for calibration (especially for precipitation), as shown by Casanueva et al. 2021. This issue of uncertainties due to the reference dataset should also be discussed.
About the bias correction and downscaling methodology, preservation of the raw model signals is not always the preferred approach (e.g. if the model does not represent basic processes correctly or has biases in large-scale processes, its local-scale trends should not be trusted either; see Fig. 5 in Maraun et al. 2017). In this sense, some recent works show that bias adjustment can lead to more realistic climate change signals (e.g. by bringing the GCM closer to more realistic regional climate models, Casanueva et al. 2019) and to more plausible threshold-based indices, e.g. in insular regions (Iturbide et al. 2022). Also, although QDM preserves trends in all quantiles, they are not preserved for some moderate/extreme indices based on absolute threshold exceedances. In this sense, the use of simple and parsimonious bias adjustment methods is supported (e.g. Iturbide et al. 2022). Did the authors try other, simpler bias adjustment methods together with the downscaling methodology? It is recommended to include some discussion along these points.
Regarding the potential use of these data by the impacts community, many impact studies require other variables beyond temperature and precipitation, which other initiatives such as ISIMIP provide (Lange 2019). This limitation should be mentioned.
Regarding the GDPCIR dataset, it is written that it "is publicly hosted on the Microsoft Planetary Computer". Does this mean that it is publicly available? Does one need to have an account and/or pay for this service? If that is the case, its use by the impacts community is not very straightforward. Why use this system instead of other completely free services more aligned with the research community? Also, do the files follow CF standards for metadata? All this is relevant and should be explained.
Regarding the pipeline, some indication of the running time and resources would be nice to have.
Specific comments
L9 Here it seems that they use these two methods as independent ones, but later I found that QDM is used for BA and QPLAD for downscaling, one after the other; thus, they are used together. Please clarify this in the abstract.
L34-37 Downscaling and adjustment are a bit mixed in these lines. BA is a mere correction and can be applied even with no resolution mismatch between model and observations. Statistical downscaling also comprises other statistical approaches which actually transfer large-scale information to the local scale (see Maraun and Widmann 2018, Gutierrez et al. 2019), some of which are quite sophisticated (Baño-Medina et al. 2021). BA includes an implicit downscaling step if the resolution of the reference data is higher than that of the model, but it is not purely a statistical downscaling method.
L38, 51, 156, 158, 165, 170, 220, 221. Better to use “model” than “GCM” in these descriptive lines since BA can be applied to any (global or regional) climate models.
L41, 150 What do you mean by “standard”? The variety of QM is huge, better to add whether empirical or parametric or a reference to guide the reader a bit more.
L53, 54 In Lange (2019) the final resolution of 0.5º is due to the reference data used, not to the BA-SD methodology itself (as the line seems to suggest). I mean, too coarsely resolved BA projections are due to the limited availability of high-resolution global datasets (before ERA5, 0.5º was the best one could get on a global scale), regardless of the BA or SD applied. It is true that this is a limitation for the use of such projections in impact studies, but the sentence could be better phrased. Note also that 1) CMIP6 models corrected with the BA-SD method of Lange 2019 are used in a large intercomparison project devoted to impacts research (ISIMIP), and 2) a large resolution mismatch between model and observations is also problematic (Maraun 2013); thus, in general, a high-resolution reference dataset is not the solution as long as model data are still too coarse. Also in line 54, "this effect … dampens or amplifies…": I do not see how coarse resolution dampens or amplifies trends in the tails; the authors may be referring to the BA or SD methodology, not resolution per se.
L59 “Several CMIP6 downscaling datasets”, I think the authors mean “Several CMIP6 downscaled datasets”.
L62 “has made a CMIP6 dataset available”, I think the authors mean “has made a bias-adjusted CMIP6 dataset available”, or similar, because CMIP6 is available from other public sources (ESGF).
L65 It is worth highlighting that ISIMIP provides a large number of variables, not limited to temperature and precipitation, since many impact studies require them, and this is an important advantage.
L69 “and no longer widely used”, I think “is” is missing.
L79 Where? Make reference to the appropriate section.
L97 I think brackets are missing for the reference.
L99 How many ESGF models were missed? Does the selected subset retain the ensemble spread of all CMIP6?
L119 Why were the 5 extra days in 360-day calendars filled with the average of adjacent days instead of being left as NaN? Averaging could be fine for temperature, but what about precipitation? I checked Pierce et al. 2014 and did not find this filling procedure.
Table 1. I suggest presenting the information on the SSPs in an easier-to-read way, with four columns and an X denoting the SSPs available for each model (in rows). Please also add the simulation run.
L178 Please clarify what is meant with “Traditional downscaling methods”. As said before, typical statistical downscaling builds empirical relationships between large-scale variables (predictors) and local scale predictands (e.g. by means of a linear model), which is not a difference or ratio.
L187 Which one is the coarse resolution? This information needs to be included in this section; it is now first found in line 222. I was wondering which method was used for interpolation and found this information in Sect. 4.1. I would suggest referring here to Sect. 4.1 for further details.
L189 Please mention here QDM explicitly, maybe in brackets, to guide the reader.
L222 "adjustment to both": I think the authors mean that the GCM wet-day frequency was adjusted to the observed counterpart; otherwise, both are adjusted to what? Also, although less common, it is also a problem when the models are much drier than observations. For this, Themeßl et al. 2012 introduced frequency adaptation. Is there any way you account for such dry biases?
L252 Why did the authors not use conservative regridding for temperatures as well?
L257-258 “using the regridding method described above” and “using the same regridding methods as in the GCM output” seem to be something different but they refer to the same, right? Please rephrase.
L267 "100 equally spaced quantiles": Do the authors work with 100 percentiles, then? In line 190 it was said that the number of quantiles is equal to the number of timesteps (20x31), as one would expect, since Cannon's QDM works with all quantiles (default option), if I am right. Please clarify.
L292 Shouldn’t be “percentiles” instead of “quantiles”?
L294 Please explain how the method deals with new extremes, i.e. quantiles of the future period which were not reached in the reference dataset (see different extrapolations in Themeßl et al. 2012).
L315-316 Is the description of the 2b panel right? It seems to represent the adjustment factor per day of the year and quantile (for the 0.25º gridbox over Miami), I do not see how spatial analogs are shown. The term “spatial analogs” is quite confusing, since for Miami the downscaled value is obtained through the adjustment function calculated between the 1º gridbox and the 0.25º gridbox over Miami, right? As far as I understood the other nearby 0.25º gridboxes do not affect downscaling for Miami, thus “spatial analogs” should be better phrased or clarified. Also how is the analog for each quantile selected from the 620 possible analogs? The mean? Randomly?
L338 I guess these unrealistic values come from the adjustment factors from QDM (please add this, e.g. in brackets), since adjustment factors are also applied in the downscaling step.
L342 “that this” I think one should be removed, otherwise I do not understand the sentence.
L414-415 Not sure what this sentence means. Of course results should be consistent with the described methodology. Moreover, is the reference to Fig.2 right?
L418 I was wondering here which data were used in Fig.3b, bias-corrected or downscaled? From this comment about more extremes in higher resolution, it seemed to be downscaled, but then I saw the reference to Fig.A2. Since this confusion comes from time to time, please try to be very clear, e.g. add “after QDM” if you are discussing both but want to refer to bias-corrected only. Also “biascorrected -model” in the figure title should rather be “biascorrected – raw model”.
L427 Was this behaviour also present in other GCMs? Do the same conclusions about accentuating the Arctic amplification hold for other GCMs? Are changes similar? How robust is this result? I would suggest showing this result (Figs. 3 and 4) for the multi-model ensemble median/mean in the supplementary material.
L439 “and” is missing before the second ratio.
Fig.4 caption and titles. Please refer to “raw model” instead of “model”.
L472 In fact, summer days, tropical nights, and annual wet days depend on thresholds. Why is the opposite mentioned? It is precisely in these indices where one can find a fair evaluation of QDM, since it preserves trends in quantiles.
L481 Null hypothesis should be rejected if p-value<0.05 and not rejected if p-value >0.05, thus it should be >. Do you use K-S for distributions of annual indices, thus 10, 15, 20 years only? Aren’t they too few data to fit distributions?
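The p-value convention raised here can be checked with a minimal two-sample Kolmogorov-Smirnov example. The data are illustrative (two small samples of an annual index), and scipy's `ks_2samp` is assumed rather than the manuscript's exact test setup:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

# Two hypothetical 20-year samples of an annual index.
downscaled = rng.normal(30.0, 5.0, size=20)
reference = rng.normal(30.0, 5.0, size=20)

# H0: both samples come from the same distribution.
# Reject H0 when p < 0.05; fail to reject when p > 0.05.
stat, p = ks_2samp(downscaled, reference)
distributions_differ = p < 0.05
```

With only 10-20 annual values per sample, as the reviewer notes, the test has little power, so a large p-value here is weak evidence of agreement rather than proof of it.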
Table 2. I do not see the rationale behind the order of the indices within the table. They could go from mean to moderate to extreme, or be ordered by input variable. What is the interest of days above 90ºF? To my knowledge that is not an ETCCDI index. Also, please use (or add in a new column) the ETCCDI nomenclature in the table, e.g. tropical nights for "tn_days_above". Otherwise, what is the index column representing?
Sect. 5.2.1 Why show results for Miami? GCMs do not represent coastlines correctly, especially those with coarse resolution. Is it a land gridbox at all models' original resolution? Do you apply any land-sea mask?
L511 Is here bias adjustment referring also to downscaling?
L517 Please mention that multiplicative factors are used for precipitation.
L520 The error in Eq. 7 is calculated as the difference (in absolute value) between the climate change signal in bias-adjusted data and raw data. Climate change signals are usually calculated over 20-yr periods or more, so I do not see how the error can be calculated on an annual basis. Furthermore, the error in Eq. 6 between the bias-adjusted model and the reference should not be calculated annually, because climate model simulations have no day-to-day or year-to-year correspondence with observations; thus the error should be calculated using 20-year periods.
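The 20-year formulation suggested here can be sketched with synthetic numbers (hypothetical data and names, not the manuscript's Eq. 6/7 implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative 20-year series of annual mean temperature (deg C).
raw_hist = rng.normal(10.0, 1.0, size=20)   # raw model, historical
raw_fut = rng.normal(13.0, 1.0, size=20)    # raw model, future
adj_hist = rng.normal(11.0, 1.0, size=20)   # bias-adjusted, historical
adj_fut = rng.normal(14.1, 1.0, size=20)    # bias-adjusted, future

# Climate-change signal per dataset, each over a full 20-year period,
# and an Eq.-7-style error as the absolute difference of the signals.
signal_raw = raw_fut.mean() - raw_hist.mean()
signal_adj = adj_fut.mean() - adj_hist.mean()
error = abs(signal_adj - signal_raw)
```

Computing the signals over whole periods, rather than year by year, avoids penalising the adjusted data for the lack of year-to-year correspondence between model and observations.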
Fig.6 It would be convenient to display the first panel with an aspect ratio of 1:1. Also, it would help the interpretation of the results to have information about the error (Eq. 6) or bias in the raw and bias-adjusted (QDM only) simulations. One idea could be to add another row of panels with scatter plots for these quantities. About these results, it is a bit of an issue that for temperatures the error in present climate of the bias-adjusted data (X axis) is larger than the modification of the change signal (Y axis). Is the modification then justified? This should at least be discussed.
L546 Please mention that these regions are considered for the cities in the previous section. This information was first found in Fig. 7 caption.
L558 What is meant by Gaussian interpolation?
Figure 8. Correct “False” in Y-axis.
Figure A1: is it first mentioned in the conclusions? Then why is not A2 the first one?
References
Baño-Medina, J., Manzanas, R. & Gutiérrez, J.M. On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections. Clim Dyn 57, 2941–2951 (2021). https://doi.org/10.1007/s00382-021-05847-0
Casanueva, A., Kotlarski, S., Herrera, S., Fischer, A. M., Kjellstrom, T., and Schwierz, C.: Climate projections of a multivariate heat stress index: the role of downscaling and bias correction, Geosci. Model Dev., 12, 3419–3438, https://doi.org/10.5194/gmd-12-3419-2019, 2019.
Casanueva, A., Herrera, S., Iturbide, M., et al. Testing bias adjustment methods for regional climate change applications under observational uncertainty and resolution mismatch. Atmos Sci Lett. 2020; 21: e978. https://doi.org/10.1002/asl.978
Gutiérrez, JM, Maraun, D, Widmann, M, et al. An intercomparison of a large ensemble of statistical downscaling methods over Europe: Results from the VALUE perfect predictor cross-validation experiment. Int. J. Climatol. 2019; 39: 3750– 3785. https://doi.org/10.1002/joc.5462
Iturbide, M., Casanueva, A., Bedia, J., Herrera, S., Milovac, J., & Gutiérrez, J. M. (2022). On the need of bias adjustment for more plausible climate change projections of extreme heat. Atmospheric Science Letters, 23( 2), e1072. https://doi.org/10.1002/asl.1072
Lange, S.: Trend-preserving bias adjustment and statistical downscaling with ISIMIP3BASD (v1.0), Geoscientific Model Development, 12, 3055–3070, https://doi.org/10.5194/gmd-12-3055-2019, publisher: Copernicus GmbH, 2019.
Maraun, D., 2013: Bias Correction, Quantile Mapping, and Downscaling: Revisiting the Inflation Issue. J. Climate, 26, 2137–2143, https://doi.org/10.1175/JCLI-D-12-00821.1.
Maraun, D. and Widmann, M.: Statistical Downscaling and Bias Correction for Climate Research, Cambridge University Press, Cambridge, https://doi.org/10.1017/9781107588783, 2018.
Maraun, D., Shepherd, T., Widmann, M. et al. Towards process-informed bias correction of climate change simulations. Nature Clim Change 7, 764–773 (2017). https://doi.org/10.1038/nclimate3418
Themeßl, M.J., Gobiet, A. & Heinrich, G. Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change 112, 449–468 (2012). https://doi.org/10.1007/s10584-011-0224-4
Citation: https://doi.org/10.5194/egusphere-2022-1513-RC2 -
AC4: 'Reply on RC2', Diana R. Gergel, 21 Jul 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2022-1513/egusphere-2022-1513-AC4-supplement.pdf
Data sets
CIL Global Downscaled Projections for Climate Impacts Research Diana R. Gergel, Steven B. Malevich, Kelly E. McCusker, Emile Tenezakis, Meredith Fish, Michael Delgado, Robert Kopp https://planetarycomputer.microsoft.com/dataset/group/cil-gdpcir
Model code and software
R/CIL GDPCIR dataset codebase Diana Gergel, Kelly McCusker, Brewster Malevich, Emile Tenezakis, Meredith Fish, Michael Delgado https://zenodo.org/record/6403794#.Y6t4sezMJAc
Dodola codebase Brewster Malevich; Diana Gergel; Emile Tenezakis; Michael Delgado https://zenodo.org/record/6383442#.Y6t5Y-zMJAc
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.