the Creative Commons Attribution 4.0 License.
Large Uncertainty in Observed Meridional Stream Function Tropical Expansion
Abstract. Recent tropical expansion rate estimates vary substantially, as a multitude of methods and reanalysis datasets yield conflicting results. Among the many methods of estimating the tropical width, the meridional stream function 500 hPa zero-crossing is the most widely used, as it is directly related to the poleward edge of the Hadley Cell (HC). Other common metrics use atmospheric phenomena associated with the HC as a proxy, for instance the zonal surface wind zero-crossing. As each of these metrics requires different data, each with varying error, the level of data-driven uncertainty differs between metrics. While previous work has analyzed the statistical and dynamical relationships between metrics, to date no study has quantified and compared the uncertainty in different HC metrics. In this study, we use ERA5 ensemble members, which include small perturbations in atmospheric variables based on the data error, to quantify the uncertainty associated with six commonly used HC metrics as well as the range of their trend estimates. In the Northern Hemisphere, the tropical expansion rate calculated by the stream function is roughly 0.05 degrees per decade, while the Southern Hemisphere rate is 0.2 degrees per decade. Of the six metrics, only the meridional stream function and precipitation minus evaporation have substantial uncertainties. The stream function errors are large due to uncertainty in the underlying meridional wind data and the presence of large regions of near-neutral circulation at the poleward edge of the tropics. These errors have decreased in recent decades because of improvements in the assimilated observations. Despite these improvements, we recommend using the zonal surface wind zero-crossing to analyze tropical extent trends in reanalyses. This is particularly important in the Northern Hemisphere, before the year 2000, and when studying individual seasons other than winter.
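The stream function zero-crossing metric described in the abstract can be sketched as follows. This is a hypothetical minimal implementation, not the paper's code (the authors use the TropD package): it finds the latitude where the zonal-mean 500 hPa stream function first changes sign poleward of the Northern Hemisphere cell's maximum, with linear interpolation for sub-gridpoint precision.

```python
import numpy as np

def sf_edge_latitude(lat, psi500):
    """NH Hadley Cell poleward edge: first zero crossing of the 500 hPa
    meridional stream function poleward of its NH maximum.
    lat: 1-D latitudes in degrees, ascending; psi500: stream function on lat.
    Hypothetical sketch -- the paper itself uses the TropD package."""
    nh = lat >= 0
    lat_nh, psi_nh = lat[nh], psi500[nh]
    imax = int(np.argmax(psi_nh))                 # core of the NH cell
    for i in range(imax, len(lat_nh) - 1):
        if psi_nh[i] > 0 >= psi_nh[i + 1]:        # sign change found
            # linear interpolation between the two bracketing grid points
            f = psi_nh[i] / (psi_nh[i] - psi_nh[i + 1])
            return lat_nh[i] + f * (lat_nh[i + 1] - lat_nh[i])
    return np.nan                                 # no crossing found
```

Applying this to each ERA5 ensemble member separately, and taking the spread of the resulting edge latitudes, gives the kind of data-driven uncertainty estimate the study reports.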
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- AC1: 'Comment on egusphere-2022-1438', Daniel Baldassare, 04 Jan 2023
- RC1: 'Comment on egusphere-2022-1438', Anonymous Referee #1, 16 Jan 2023
-
RC2: 'Comment on egusphere-2022-1438', Anonymous Referee #2, 03 Feb 2023
In this manuscript the authors attempt to estimate the observed uncertainty in recent Hadley cell extent trends/changes as seen in ERA5. For this, the authors analyze the spread across ERA5 members in different Hadley cell metrics. Lastly, the authors link the uncertainty in the HC extent to the uncertainty in the magnitude of the circulation around its edge. The overall motivation for this research, as stated in the abstract and introduction, is the different expansion rates across different Hadley cell metrics reported by previous studies. In this manuscript, however, the authors do not address this issue. In fact, the ERA5 uncertainty of each metric is not as large as the previously reported inter-metric spread, nor as large as the trend itself of each metric. So I am not sure that this manuscript helps us better constrain the different Hadley cell expansion rates. Moreover, as discussed in previous studies (which the authors cite), the lack of correlation between the different metrics suggests that they represent, and are driven by, different processes, and we should thus not necessarily expect to observe the same trend in each metric. Only reporting the uncertainty in the Hadley cell trends (which seems to be small relative to the signal and the inter-metric spread) is, in my opinion, not sufficient for publication. The most important conclusion here is the reduction in uncertainty across ERA5 members over the years, but this is a technical result.
Below I list several more comments (major and minor):
1. The introduction, in my opinion, should be broadened to give a larger context for this problem and why it is important to investigate the expansion of the circulation. The authors should discuss how/where the Hadley cell is projected to change in coming decades, the mechanisms underlying recent and future Hadley cell changes, and the impacts of such expansion.
2. How much does the ERA5 spread differ from the large-ensemble spread, which has been documented in previous work (e.g., Grise et al. 2019)?
3. The missing December at the beginning of ERA5 should not be a motivation to define the annual mean from March to February. This does not allow proper comparisons to previous work. I suggest using, only for the first year, January and February for NH winter, and DJF for other years, and using January to December as the canonical definition of the annual mean.
4. Please itemize the different paragraphs in the methods section discussing each metric.
5. The Hadley cell extent is usually found by interpolating the data to a finer grid; have you done the same here?
6. In the normalized STD you divide the inter-member spread by the interannual variability, but these two may represent different processes. I am thus worried that this metric does not represent a normalized uncertainty.
7. In Sec. 3.5 the authors argue that the uncertainty in Hadley cell expansion is linked to the gradient of the stream function at the Hadley cell edge. However, this link is based on a correlation of only eight points, five of which do not follow the regression line and show no sign of correlation. I am thus not convinced by the authors' arguments, and suggest removing this analysis along with its discussion.
Citation: https://doi.org/10.5194/egusphere-2022-1438-RC2
-
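The reviewer's point 6 concerns the "normalized STD", which divides inter-member spread by interannual variability. The following sketch shows one plausible way this statistic could be computed; the array layout and the exact definition of the two standard deviations are assumptions, not taken from the paper.

```python
import numpy as np

def normalized_std(field):
    """Inter-member spread normalized by interannual variability, in percent.
    `field` has shape (member, year). A hypothetical sketch of the statistic:
    numerator  -- std across ensemble members, averaged over years;
    denominator -- std across years of the ensemble-mean time series."""
    intermember = field.std(axis=0, ddof=1).mean()
    interannual = field.mean(axis=0).std(ddof=1)
    return 100.0 * intermember / interannual
```

The reviewer's concern is visible in this construction: the numerator reflects assimilation/data uncertainty while the denominator reflects atmospheric variability, so the ratio mixes two distinct sources of spread.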
RC3: 'Comment on egusphere-2022-1438', Anonymous Referee #3, 28 Feb 2023
Review of “Large Uncertainty in Observed Meridional Stream Function Tropical Expansion”
This paper analyzes various “tropical width” metrics from the ERA5 reanalysis ensemble members to assess the “data-driven” uncertainties in the various metrics. As a wide range of widening estimates have been published over the past decade and a half or so, it is very helpful for the community to have a better understanding of the uncertainties in these estimates. The authors’ use of the ERA5 ensemble members is to be commended, as this available data has heretofore been highly under-utilized. It is encouraging to see the demonstration that, in some cases, the increased observational system in recent decades has led to a smaller spread of tropical width estimates. However, this paper contains numerous over-interpretations of the results presented and fails to compare the spread due to data uncertainty with the much larger spread due to interannual variability. When placed in the proper context, the results of this paper represent an interesting finding regarding relative uncertainty in tropical width metrics and changes over time, but the results are highly oversold as currently written.
Most critically, the authors have not made the very obvious comparison between their “data-driven uncertainty” and the uncertainty one gets in the tropical width trends due simply to interannual variability. The authors do use a “normalized intermember STD” in Sect 3.3 (e.g., Fig. 4), and the limited info available from this suggests that data-related uncertainty is actually not that large relative to interannual variability. E.g., the normalized STD near the HC edges at 500 hPa in Fig. 4 is ~6%, which means that the spread in ensemble members is only 6% of the year-to-year variability in the streamfunction.
More importantly than showing this normalized STD for a field like the meridional streamfunction is showing what it means for the mean HC edge position and trends therein. From what I can glean from Figs. 1 and 2, the spread in SF edge trends is ~0.05° decade-1 for the annual mean. Comparing to studies that have looked at multiple reanalyses (e.g., Fig. 4 of Davis and Rosenlof, 2012), SF edge trends span well over 1° decade-1 among different reanalyses, and their uncertainties (for a single reanalysis, due to interannual variability) are ~0.5° decade-1. If this were the only information available, I would say it shows that data-driven uncertainty is in fact not a substantial contributor to our understanding of historical tropical widening trends. The authors need to make a much more convincing argument as to why data-related uncertainty is substantial in the contexts of all of the other uncertainties related to different reanalyses, metrics, and interannual/natural variability.
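The trend comparison the referee describes (spreads of ~0.05° decade⁻¹ versus inter-reanalysis differences of over 1° decade⁻¹) rests on ordinary least-squares trends of edge-latitude time series. A minimal sketch of that standard calculation, as a hypothetical helper rather than anything from the paper:

```python
import numpy as np

def trend_per_decade(years, edge_lat):
    """Least-squares linear trend of an edge-latitude time series,
    returned in degrees per decade. Standard approach; hypothetical helper."""
    slope_per_year = np.polyfit(years, edge_lat, 1)[0]
    return 10.0 * slope_per_year
```

Computing this per ensemble member and taking the spread of the slopes gives the ~0.05° decade⁻¹ figure the referee reads off Figs. 1 and 2, which can then be compared against the ~0.5° decade⁻¹ interannual-variability uncertainty of a single reanalysis.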
Some more minor points are as follows:
Line 15: “data-driven uncertainty” – I don’t agree that the ensemble member spread in tropical edge latitudes (or their trends) can simply be interpreted as “data-driven uncertainty” and meaningfully compared across metrics. The atmosphere may be very smooth/laminar/‘predictable’ in some places and much less so in others. For metrics based on fields that are ‘predictable’, one could easily get the same tropical width (or tropical width trend) among ensemble members even in an area with no data. Conversely, in highly chaotic/noisy/unpredictable regions, one might get a large spread in width/trend values even with relatively high quality/dense data to constrain the reanalysis. Ultimately, the only way to truly get at whether the differences in ensemble spread are “data-driven” vs. “atmosphere-driven” would be to compare the spread in an ensemble of free-running simulations to the spread in a (data-constrained) reanalysis. Without doing this, all you can say is that one metric is noisier than another. But one doesn’t even really need reanalysis ensembles to demonstrate that, it has been shown before in a number of papers that have presented tropical width trends.
Line 17-18 “to date no study has quantified and compared the uncertainty in different HC metrics”. This is simply incorrect. Numerous studies have both quantified and compared uncertainty in different HC metrics.
Line 79: “several ensemble members” Just state how many there are.
Lines 105 - 106: Please provide a reference describing the ensemble data assimilation.
Lines 106 – 108: Please provide more details on the differences between the full ERA5 reanalysis and the ensemble version, as this is highly relevant to the interpretation of all of the results in this paper. For example, do both versions assimilate the same data? How different are the resolutions? Also, as all previous studies have used the full ERA5 reanalysis, it is critically important to also include the metrics and trends here for reference. Without doing this, we don’t know whether there is some kind of bias in the lower resolution ensemble version of the reanalysis, or whether to expect that the results in this paper actually apply to the full reanalysis.
Line 110 – 112: Given that ERA5 resolution is ~30km (I’m not sure what the ensemble version is), reducing the data to 1 degree for your analysis seems like a really bad choice. Especially when you are trying to infer extremely small changes ~ 0.1 degree per decade. Some consideration needs to be paid to whether or not this degradation of the reanalysis resolution impacts your analysis.
Line 111: “conservational” -> conservative
Line 119: Please provide link/reference/version information for the python TropD code you are using.
Line 126: “… poleward of the minimum … equatorward of the maximum … “ this is correct for the NH but not the SH edge. Please clarify or make the language more general to apply to both hemispheres.
Line 134: I believe Solomon et al. 2016 was the first to show this, and should be cited here in addition to Davis and Birner.
Solomon, A., L. M. Polvani, D. W. Waugh, and S. M. Davis: Contrasting Upper and Lower Atmospheric Metrics of Tropical Expansion in the Southern Hemisphere, Geophys. Res. Lett., 2016, https://doi.org/10.1002/2016GL070917.
Line 143-144: “[PSL is] …poorly correlated to P-E”. From Waugh et al. 2018, the correlation between PSL (they call it SLP) and P-E is 0.32 in the NH and 0.69 in the SH. I would not call this correlation poor in the SH. Also, PSL is correlated with UAS in the SH with 0.98, so they are virtually interchangeable with one another. The authors should do a better job of making this distinction.
Line 153: “intermember average” (and many other points in the paper) - It is much more common to refer to this type of average as an ensemble average. I suggest changing this terminology throughout the paper.
Line 165-166 and Figure 1: I don’t understand the need to do some kind of fit to 9 data points here, why not just show a simple histogram of the data? If the authors have a compelling reason to use these ‘kernels’, they should explain clearly, and also briefly explain Scott’s rule.
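The "kernels" and Scott's rule the referee asks about refer to a Gaussian kernel density estimate, which smooths a handful of values (here, nine trend estimates) into a continuous curve. A hypothetical sketch, not the paper's plotting code, showing both the bandwidth rule and the kernel sum:

```python
import numpy as np

def scott_bandwidth(x):
    """Scott's rule of thumb for a 1-D Gaussian KDE: h = sigma * n**(-1/5)."""
    return np.std(x, ddof=1) * len(x) ** (-1.0 / 5.0)

def gaussian_kde(x, grid):
    """Density estimate as a sum of Gaussian kernels centred on each data
    point, evaluated on `grid`. Hypothetical sketch of the Fig. 1 curves."""
    h = scott_bandwidth(x)
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
```

With only nine points, the choice of bandwidth dominates the shape of the curve, which is why the referee suggests a plain histogram may be the more transparent display.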
Line 177 (and possibly elsewhere in paper): “low” -> “small” The word low is appropriate for heights, not quantities.
Lines 176-179: This sentence doesn’t make sense. The authors state that the trends are a substantial downward revision but then that they are similar to Grise et al. It’s not clear what the authors are trying to say here.
Line 189-190: This goes to my point about “data-driven uncertainty” being a misnomer for describing the spread among reanalysis ensemble members. The authors state that over the SH, SF-based expansion is more robust (i.e., less uncertain). However, we know that reanalyses are less constrained by data in the SH. The fact that the “data-driven uncertainty” is smaller in the SH suggests that other factors are at play such as a stronger forced signal (e.g., due to ozone depletion) or less noise.
Figure 3: It is very hard to tell the difference in color between the UAS and PSL curves. Please consider using different colors.
Line 226: Are the reanalysis ensemble members really created by perturbing the model parameters?! Maybe the authors mean initial conditions here and not model physics, which is what I think of when I hear the word “model parameters”. This is really important information for the reader to understand, as it has direct bearing on how the spread of the ensemble members is interpreted.
Fig 6b: Is “SF uncertainty” just the ensemble STD of the SF in the vicinity of the edge (+- 2 deg) at 500 hPa? Or is it the ensemble average of the standard deviation within +-2 deg of the edge computed from each member? The caption doesn’t specifically state what the statistic is, so this discussion is hard to evaluate.
Citation: https://doi.org/10.5194/egusphere-2022-1438-RC3
-
AC2: 'Comment on egusphere-2022-1438', Daniel Baldassare, 08 Mar 2023
We thank the reviewers for their comments and suggestions. Our study presents new and important insights into the problem of Hadley cell expansion. First, we present the first estimates of tropical expansion across multiple metrics using the ERA5 reanalysis. Second, our study investigates the data uncertainty of the tropical width, whereas previous studies only examined the spread between different reanalyses, which is larger due to differences in the assimilation schemes and assimilated observations. Third, we show that the uncertainty in the streamfunction-defined tropical width is large, particularly in the Northern Hemisphere in summer and fall, due to a weak streamfunction gradient. Fourth, we combine our results with previous studies to provide information about the issues with certain metrics.
We believe that we can respond to the comments from the reviewers, resulting in an improved manuscript. One issue apparent in all three reviews was the comparison of the ERA5 ensemble spread to the inter-reanalysis spread. We agree that the ERA5 ensemble was inadequately described in this manuscript, resulting in confusion about the significance of the results. We plan to add more information about how the ERA5 ensemble members are produced and how this results in a smaller spread than the inter-reanalysis spread. We will also focus more on the relative uncertainty between metrics, which spans as much as two orders of magnitude and provides useful insights about the reliability of different metrics. Another issue brought up by multiple reviewers is the appropriateness of the recommendations in the conclusion. After reading the comments, we agree that ranking metrics based on reanalysis data uncertainty is not appropriate, and we intend to present the results alongside previous studies rather than ranking metrics.
Citation: https://doi.org/10.5194/egusphere-2022-1438-AC2
Peer review completion
Data sets
ERA5, accessed from the Copernicus Climate Data Store (Hersbach et al., 2020), https://cds.climate.copernicus.eu
Model code and software
ERA5 Analysis Python Code, Daniel Baldassare, https://doi.org/10.5281/zenodo.7430530
Daniel Baldassare
Thomas Reichler
Piret Plink-Björklund
Jacob Slawson