This work is distributed under the Creative Commons Attribution 4.0 License.
jsmetrics v0.1.1: a Python package for metrics and algorithms used to identify or characterise atmospheric jet-streams
Abstract. The underlying dynamics controlling this planet’s jet streams are complex, but it is expected that they will have an observable response to changes in the larger climatic system. A growing divergence in regional surface warming trends across the planet, which has been both observed and projected since the start of the 20th century, has likely altered the thermodynamic relationships responsible for jet stream formation and control. Despite this, the exact nature of trends in the jet streams generally remains unclear, and there is no consensus in the literature. The latest IPCC report highlighted that trends both within and between a variety of observational and modelling studies were inconsistent (Gulev et al., 2021; Lee et al., 2021), and trends in the jet streams were assigned low to medium confidence, especially in the Northern Hemisphere.
However, what is often overlooked in evaluating these trends is the confused message in the literature around how to first identify, and then characterise, the jet streams themselves. For characterisation, approaches have included isolating the latitude of the maximum wind speed, using sinuosity metrics to quantify jet ‘waviness’, and using algorithms to identify jet cores or jet centres. Each of these highlights or reduces certain aspects of jet streams, exists within a given time window, and characterises the jet within a given (Eulerian or Lagrangian) context. While each approach can capture particular characteristics and changes, they are all subject to the spatial and temporal specifications of their definition. There is therefore value in using them in combination, to assess parametric and structural uncertainty, and to carry out sensitivity analyses.
Here, we describe jsmetrics v0.1.1, a new open-source Python 3 module with standardised versions of 16 metrics that have been used for jet stream characterisation. We demonstrate the application of this library with two case studies derived from ERA-5 climate reanalysis data.
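A minimal sketch of the intended workflow, assuming the package's pattern of applying a named metric function to an xarray dataset (the module and function names shown are illustrative and should be checked against the jsmetrics documentation for the installed version):

```python
# Minimal usage sketch. The metric function named below is illustrative
# of the package's interface, not a confirmed signature.
import xarray as xr
from jsmetrics.metrics import jet_statistics

# ERA-5 zonal wind on pressure levels, read through xarray
ua = xr.open_dataset("era5_ua.nc")

# Apply one of the standardised jet statistics, e.g. Barnes & Polvani (2013)
jet_stats = jet_statistics.barnes_polvani_2013(ua)
```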
Notice on discussion status
This preprint has a corresponding peer-reviewed final revised paper. Readers are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-661', Anonymous Referee #1, 17 Jul 2023
"jsmetrics v0.1.1: a Python package for metrics and algorithms used to identify or characterise atmospheric jet-streams" outlines a new software package developed for the testing and comparison of different methodologies for describing jet streams and their characteristics. It briefly demonstrates how the package might be used to quantify different types of uncertainty regarding these methodologies, as well as detailing the process for continuing to add other methodologies into the package. By using the package, the authors demonstrate that there is considerable variation in the latitude of the Northern Hemisphere wintertime jet stream as identified by the various methods, which nicely supports the authors' argument for the need for this software.
Overall, the manuscript is close to being in a publishable form. It is well written, and the software presented appears to address a legitimate need in the larger community. I especially appreciate that they lay out a process for the expansion of the software into a tool that might really serve the community well.
I have mostly minor concerns, the largest of which is the lack of demonstration of a nontrivial element of the software. The authors do not give any motivation for this omission, although it could be justified. Still, it left this reviewer with the impression that either the manuscript or the package was a bit incomplete without some acknowledgement or demonstration of the full capabilities of the software. See more detailed comments about this below.
Another larger concern is the authors do not provide a rationale for which packages were chosen for the initial release. This is easily remedied, but it does make evaluation of the completeness of the package difficult. At present, there are several metrics broadly used by the community that are left in the "Future Work" section (Table 5) which might greatly increase the usefulness and adoption of this software. A more thorough justification of which metrics were implemented before first release (which, by extension, helps explain why some were not) would alleviate these concerns.
Finally, a suggestion for the authors, which they may freely reject, is to consider more quantitative ways of evaluating the discrepancies between metrics. Given the present manuscript generally only compares broad patterns/trends seemingly drawn from visual inspection of the figures, there is not much that can be robustly concluded from the comparisons done here. I recognise that the purpose of the manuscript is more about the software than its application, but perhaps a more robust and consequential application would make the software more attractive to potential users.
One suggestion in this vein is to compare the results of a metric against quantifiable parts of its definition to demonstrate how the definition of a metric may explain discrepancies across metrics. This is an important part of identifying structural uncertainty -- identifying its primary source. The manuscript at present raises this question and posits some answers, while a more rigorous answer to the question would be possible with minimal additional analysis. I think this would be a nice addition to the manuscript, but the authors may legitimately determine it is out of scope. If there are concurrent efforts to do this kind of extension as a separate work, perhaps they could be detailed more in the conclusions.
Other smaller suggestions follow. Most of these are simply aimed at improving the clarity and accuracy of the manuscript and could be addressed with relative ease.
Line 11: "Each of these…" subject-verb agreement
Line 43: Overall, very nice introduction.
Line 50: Perhaps some mention should be made that these metrics focus entirely on tropospheric jet streams
Line 53: While generally true for the existence, not always the case for variability/forced response. See Menzel et al. 2019.
Line 63: Suggest rephrasing.
Line 65: Suggest rephrasing to improve clarity and specificity. Something like "Understanding how jet streams operate between seasons, between phases in climate oscillations, and in response to human activities could enable projections about the regimes of (extreme) surface weather across timescales."
Line 73: One general comment - it would be helpful to get some more justification from the authors about the specific methods used in the initial release. What was the motivation for including the methods that are here and not some of the ones mentioned which are planned for future implementation? Whether methods were chosen based on an assessment of their relative importance, their relative ease of implementation, or some combination of these or other factors would be helpful to know. In particular, there are some important but more complex methods which are not yet implemented (e.g., jet latitude index, local wave activity, etc.), and the reader is left to guess why they were not implemented in the initial release. There is clearly a lot of resource invested in the metrics included, and some discussion of the strategy for choosing them would be warranted.
Line 81: suggested rephrase: "approaching understanding" -> "understanding"
Line 90: suggested rephrase: "However, by isolating lower-level winds, these methods may miss aspects of jet streams whose eddy-driven components do not extend throughout the atmospheric column within the method's given time window."
Line 101: suggested rephrase: "so they can be used"
Line 110: suggested rephrase: "so they are less appropriate"
Line 124: It may be worth noting that use of the meridional circulation index itself is highly debated (see Barnes 2013). Also consider referencing Blackport and Screen 2020, Blackport et al. 2019 to capture both sides of the debate
Line 125: suggest rephrasing (run-on)
Line 135: suggested rephrase: "they provide relatively more detail"
Line 142: suggested rephrase: "these methods can discount the influence of multiple jet streams…"
Line 147: Perhaps another limitation of threshold-based metrics is that they may not operate properly in climate regimes very different from the present (e.g., RCP8.5)
Line 170: Perhaps xarray's interoperability with netCDF and GRIB data formats could be advertised
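For reference, both formats are read through the same xarray entry point; the GRIB path below assumes the optional cfgrib backend is installed (file names are placeholders):

```python
# xarray reads netCDF natively and GRIB via the optional cfgrib engine.
import xarray as xr

ds_nc = xr.open_dataset("winds.nc")                      # netCDF
ds_gb = xr.open_dataset("winds.grib", engine="cfgrib")   # GRIB (needs cfgrib)
```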
Line 180: In general I think this design choice is a good one, but it does come with trade-offs which might be acknowledged/mitigated. One concern with this design choice is the atomization of code, which can make maintenance and readability more challenging, as it requires a maintainer or reviewer to search through many potential code paths. A small example from the codebase: jsmetrics.metrics.jet_statistics_components.get_atm_mass_at_one_hPa is essentially the same function as calc_atmospheric_mass_at_kPa but does a basic unit conversion prior to the calculation (i.e., divides by 10). The granularity of the component functions trades off against their eventual or actual reusability, and how flexibility in the codebase is implemented is something to keep in mind, or even potentially discuss further at this juncture.
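One possible shape for the consolidation this comment gestures at is a single component parameterised by unit; the body below is a placeholder sketch, not the package's actual implementation:

```python
# Hypothetical refactor: one component with explicit unit handling,
# replacing two near-duplicate functions that differ only by a factor of 10.
def calc_atmospheric_mass(pressure, unit="hPa"):
    """Approximate mass per unit area above a pressure level (kg m^-2).

    Placeholder formula (p / g); only the structure is the point here.
    """
    G = 9.81  # gravitational acceleration, m s^-2
    to_pa = {"hPa": 100.0, "kPa": 1000.0, "Pa": 1.0}
    if unit not in to_pa:
        raise ValueError(f"unsupported unit: {unit}")
    return pressure * to_pa[unit] / G
```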
Line 207: Great to have this process outlined
Line 255: This explanation requires some clarification. Based on Table 1, the Barnes and Simpson method is the only one that is not using an interpolation to a higher resolution. The others are using parabolic/quadratic/cubic interpolations. Yet the way this sentence currently reads, it sounds as if Barnes and Simpson's interpolation scheme is somehow what makes it different from the others. If this is not intended, please clarify. If this is the explanation, it is unclear to me how the quadratic interpolation scheme is different enough from the others to cause such a discrepancy. Also, I think Table 1 should then include some mention of the interpolation scheme if it is significant enough to cause such variation between metrics. It seems to me, based on Table 1, that an alternative explanation might be the altitude utilized for jet identification. Figure 5 backs up this interpretation - when Barnes and Simpson is run on the same levels as the others, it performs much more consistently with the other metrics. This is discussed in the paragraph following the explanation, but it is currently not clear how these two explanations are connected; as written, it seems as if the difference in pressure levels is a second-order explanation rather than first-order.
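For context, the parabolic/quadratic refinement several of these statistics use amounts to fitting a parabola through the gridded wind maximum and its two neighbours and taking the vertex as the jet latitude; a minimal sketch with invented numbers:

```python
# Parabolic (quadratic) refinement of a gridded wind maximum to a
# sub-grid jet latitude. Data values below are made up for illustration.
import numpy as np

def refine_jet_latitude(lats, winds):
    """Latitude of the vertex of a parabola fit around the wind maximum."""
    i = int(np.argmax(winds))
    i = min(max(i, 1), len(lats) - 2)      # keep a 3-point stencil in bounds
    a, b, _ = np.polyfit(lats[i - 1:i + 2], winds[i - 1:i + 2], deg=2)
    return -b / (2.0 * a)                  # vertex of a*x^2 + b*x + c

lats = np.array([40.0, 45.0, 50.0, 55.0])
winds = np.array([18.0, 26.0, 25.0, 15.0])
print(refine_jet_latitude(lats, winds))    # ~46.9, between the 45/50 grid points
```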
I think a lot of my confusion here could be cleared up by more careful language enumerating exactly which metrics are being discussed. If the constant referencing of different metrics becomes cumbersome, you may consider abbreviating each metric name in Table 1. Maybe something like BP15, BP13, GP17, etc. I leave that decision up to the authors, but I think having a small, concrete label for each algorithm might help with clarity overall.
Line 260: However, there is generally good agreement between Bracegirdle 2018 and Barnes and Polvani 2013, one of which uses a single level and the other does not, so I'm not sure this blanket explanation works for all algorithms. I think focusing the explanation on the largest discrepancies is best. Alternatively, consider plotting statistics as a function of the input parameters, such as variance as a function of altitude or resolution. This would then very clearly support any explanations regarding discrepancies between metrics.
Line 299: I notice there is not currently a demonstration of the jet waviness metrics, which makes me wonder a bit about their inclusion in the software. I recognize there are only two at present, which makes comparison difficult, and these are likely the more complex algorithms to implement of all of the included metrics. Still, perhaps some application to the cold wave event in 4.2, which features a prominent trough, could at least display jsmetrics' capabilities. Otherwise, it feels like they should be left in the future work column if they are not complete enough to get even some (possibly trivial) scientific result from. My opinion is a bit mixed here, as in general I don't want to suggest much additional analysis, but I think doing a bit more with these is worth the authors' consideration.
Line 327: The Figure caption notes that each time window is centred on the same time, but it might be nice to include a description of the temporal window selection in the main text as well.
Figure 7/8: One consideration which should be made here is the choice to use a contour plot for what is essentially categorical data (jet detected/not detected). Contour plots will interpolate between grid points in ways that may distort the underlying data when it is discrete instead of continuous. While the authors may find compelling reasons to keep the figures as they are, please at least consider using a grid-coloring map (i.e., pcolormesh) instead of a contour plot to avoid generating details which may not be present in the data. An exception here may be the 8-day counts in Figure 8. I'm not familiar enough with the post-processing technique used here to know whether the counts should be discrete or continuous, although I would generally expect counts to be discrete.
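A minimal illustration of the rendering difference on a synthetic detection mask (random data, purely to show the plotting behaviour):

```python
# contourf interpolates a categorical jet-detected mask between grid
# points; pcolormesh colours each grid cell as-is. Synthetic data only.
import numpy as np
import matplotlib.pyplot as plt

lon = np.arange(0, 360, 10)
lat = np.arange(-90, 91, 10)
rng = np.random.default_rng(0)
mask = (rng.random((lat.size, lon.size)) > 0.7).astype(int)  # 1 = jet detected

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.contourf(lon, lat, mask)
ax1.set_title("contourf (interpolated)")
ax2.pcolormesh(lon, lat, mask, shading="auto")
ax2.set_title("pcolormesh (discrete)")
plt.show()
```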
Line 347: Does the estimation of latitude from jet core algorithms use any interpolation to higher resolution as is common with the jet statistic algorithms? I think whether this is done on native resolution or a higher one should be explicitly stated; it may play some part in their overall discrepancy.
Figure 9: The text states (349) that Barnes and Polvani 2015 produces a single estimate; however, it appears here to have a PDF. There are no colors corresponding to Barnes and Simpson or Bracegirdle; I suspect that their colors appear together as the grey vertical line. However, Barnes and Simpson should have a PDF, so I think there is an issue with the legend here. And if two metrics do fall at exactly the same location, please find a better way to distinguish them, or perhaps make a note in the text; it is somewhat confusing as currently displayed.
Line 361: is this analysis runner available to package downloaders? I could not find it in the repo. If not, why not provide some helpful scripts, perhaps pared down versions of the ones used for this work?
Citation: https://doi.org/10.5194/egusphere-2023-661-RC1
AC2: 'Reply on RC1', Tom Keel, 18 Sep 2023
Thank you very much for your review and suggestions. It is clear that you have invested a lot of time, and we greatly appreciate that.
We are glad to hear that you feel the manuscript is in a near-publishable format, but we understand the minor concerns you raise. Firstly, we accept your largest concern with the manuscript – the lack of a demonstration of a non-trivial use of the software. We have prepared a manuscript using jsmetrics on ScenarioMIP to look at uncertainty. Your suggestion of examining the sensitivity of a particular metric to its parameters is an intriguing one that we shall explore while preparing revisions. Since your review, we have also started work on improving the documentation of the package, including broader demonstrations of its use.
Regarding your second concern, we will improve on our rationale behind the inclusion of the metrics currently in the package, as you suggest. We plan to expand Tables 1, 2, 3 & 5 by adding a column containing a count of each metric’s use in subsequent literature.
Finally, we shall include a more quantitative way of evaluating the discrepancies between metrics. We believe we can achieve this with some minor changes to the information and values given in the text, and some small extensions to the figures.
Thank you for your further comments, and we intend to address all of them.
All the best,
Tom and co-authors.
Citation: https://doi.org/10.5194/egusphere-2023-661-AC2
RC2: 'Comment on egusphere-2023-661', Gloria Manney, 04 Aug 2023
AC1: 'Reply on RC2', Tom Keel, 16 Aug 2023
Dear Gloria,
Thank you very much for your review; it is clear that you have invested a lot of time investigating the package, and we are grateful that you have done so.
You mentioned that our implementation of M11 in this package may not be the same as yours. If possible, we would be very grateful if you could provide us with the code for JETPAC, so that we can check we have implemented it properly.
All the best,
Tom and co-authors.
Citation: https://doi.org/10.5194/egusphere-2023-661-AC1
AC3: 'Reply on RC2', Tom Keel, 18 Sep 2023
Dear Gloria,
Thank you again for your extensive review; we are very grateful for your comments, and we hope to have addressed some of your more major concerns in this interim period.
Firstly, we agree with your primary concern about the maturity of the package and its documentation at the time of your review. We recognise that neither the usage guidelines nor the documentation provided were sufficient. During the interactive discussion, we have made significant progress offline to expand and improve the online and in-code documentation and use cases. We were also able to identify the faults with the M11 algorithm that you highlighted, and we hope to continue a dialogue with you to work on and verify its implementation.
In improving the documentation provided for this package, we also hope to have alleviated some of your broader concerns about the implementation and interpretation of the methods. We now include detailed descriptions of the methods and examples of their use within their docstrings, and we have also added a notes section to each method detailing more specific implementation points.
Regarding your concern about metric validation, we have so far been able to clean up and verify 12 of the 17 metrics included in the package. For the remainder, we will take your advice and reach out to the other authors whose metrics we felt needed some clarification during the revision process.
Finally, regarding your concern about how we distinguish between ‘jet statistics’ and ‘jet core algorithms’, we would like to highlight that this decision was made based on the dimensionality of the outputs. Whereas the jet statistics return a single value, the jet core algorithms return a multidimensional mask. Our motivation was that this would allow the user to use the mask provided by a given jet core algorithm to extract other quantities (winds, altitude, etc.), and we have now provided examples of exactly this in the documentation.
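A minimal sketch of that mask-based workflow, with illustrative variable names rather than the package's actual output conventions:

```python
# Use a jet-core mask (as returned by a core algorithm) to extract other
# quantities from the same dataset. Dataset and variable names are
# hypothetical placeholders.
import xarray as xr

ds = xr.open_dataset("era5_uv.nc")       # e.g. ua on (time, plev, lat, lon)
mask = ds["jet_core_mask"] == 1          # wherever a jet core was detected

core_winds = ds["ua"].where(mask)        # wind speed at jet cores only
core_plevs = ds["plev"].broadcast_like(ds["ua"]).where(mask)  # core altitudes

print(float(core_winds.mean()))          # mean wind speed within jet cores
```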
Thank you for the further comments you provide in your review; we intend to address all of them.
All the best,
Tom and co-authors.
Citation: https://doi.org/10.5194/egusphere-2023-661-AC3
Authors: Tom Keel, Chris Brierley, and Tamsin Edwards