Evaluation of North Atlantic Tropical Cyclones in a Convection-Permitting Regional Climate Simulation
Abstract. This study employs a 20-year, convection-permitting (CP) regional climate model, forced by ERA5 reanalysis, to assess the representation and climatology of tropical cyclones (TCs) over the North Atlantic. We demonstrate that, relative to observations from the Hurricane Database version 2 (HURDAT2), the model captures TC frequency better than ERA5, averaging 12.25 TCs per year compared with 12.5 in HURDAT2 and 7.45 in ERA5. The model also successfully resolves the upper tail of the observed TC intensity distribution, while ERA5 only resolves TCs of Category 2 or lower intensity on the Saffir-Simpson scale. By contrast, the model and ERA5 show comparable skill in resolving the overall distribution of TC central pressure, implying that minimum central pressure may be a skillful predictor of TC intensity for coarser datasets. Spatially, the CP model exhibits particular added value over data-sparse coastlines in Central America and the Caribbean, successfully resolving clusters of TC track density that are missing in ERA5. Finally, a composite analysis of the 10 strongest TCs in each dataset, along with a case study of Hurricane Isabel (2003), reveals that TCs in the CP model have realistic inner-core structural features that are not apparent in ERA5, including a more compact and intense radius of maximum wind. This is likely due to the CP model’s enhanced capability to capture small-scale convection and storm structure. These improvements exemplify the presented CP model’s efficacy for local-scale, TC-induced hazard preparedness and risk assessment of critical infrastructure, especially in regions lacking existing high-resolution climate data.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-1805', Anonymous Referee #1, 16 Jul 2025
General comments
This study compared Tropical Cyclone (TC) statistics over 20 years from three datasets: HURDAT2 (observation-based track data), ERA5 (reanalysis), and ADDAv2 (ERA5 dynamically downscaled to 4 km). Although the results are not unexpected, the comparison is informative and I think it should be published. I also do not notice anything fundamentally problematic with the analyses presented in the manuscript, although I consider the inter-dataset comparison in the analysis of Hurricane Isabel to be questionable.
The manuscript however suffers from severe presentation issues. These must be addressed before it can be considered for publication.
- The methodology must be clearly described. Sufficient details must be provided such that readers should be able to more-or-less reproduce the results. This is a very severe issue and needs to be addressed.
- The figures should be upgraded to publication quality. They are of insufficient resolution, and it is difficult to make out the details particularly in the smaller panels. There are some photo-editing artifacts. The font-size should be large enough such that all text is readable, i.e. generally comparable to the size of the main text. Abbreviations, lines, and other marks in the figures should be clearly described in the captions.
- The statistics described by descriptive statements should be summarised in a table or tables. When similar types of comparisons are made, the same set of statistics should be consistently calculated, and the presentation through figures should also be consistent. An example would be the presentation of mean annual TC counts, standard deviation, specific peaks, and seasonal cycle (L197-207). The counts of different intensity categories, however, were only presented with a statement about peaks (L214-218). It is certainly not necessary to describe everything, but the information should be made available in tables, either in the main text or in the supplement.
- Many statements in the manuscript compare the three datasets. Some are supported with statistical tests, but some are not. As the interannual variability of TCs is large, statements comparing diagnostics using mean values should be accompanied by statistical testing, standard errors, confidence intervals, or some other method to establish whether there are really notable inter-dataset differences (a minimal example is sketched after this list).
- Please check that acronyms, mathematical symbols, and non-standard diagnostics have been described in the text.
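As a concrete illustration of the statistical testing mentioned above (comparing mean annual TC counts in view of their uncertainty), here is a minimal sketch (not the authors' code; the annual-count arrays are synthetic stand-ins and would be replaced by the actual 20-year counts) of a percentile-bootstrap confidence interval for the difference in means:

```python
# Hedged sketch: bootstrap CI for the difference in mean annual TC counts.
# The arrays below are synthetic stand-ins, not the manuscript's data.
import numpy as np

rng = np.random.default_rng(42)
counts_adda = rng.poisson(12.25, size=20)    # stand-in for ADDA_v2 annual counts
counts_hurdat = rng.poisson(12.50, size=20)  # stand-in for HURDAT2 annual counts

def bootstrap_mean_diff_ci(x, y, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for mean(x) - mean(y), resampling years with replacement."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(x, size=x.size, replace=True).mean()
                    - rng.choice(y, size=y.size, replace=True).mean())
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_mean_diff_ci(counts_adda, counts_hurdat)
print(f"95% CI for mean difference: [{lo:.2f}, {hi:.2f}]")  # CI spanning 0 => no clear difference
```

If the interval comfortably spans zero, a statement that two datasets have comparable mean counts is supported; otherwise the difference should be reported as notable.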
I would also urge further investigation to support some statements in the text, when it is relatively straightforward. Not everything needs to be presented as that would make the manuscript too long, but the findings could be summarised, or the material could be added in the supplement.
Specific comments
1. L115-129 (Section 2.1): Description of ERA5 is needed. In particular, the spatio-temporal resolution including vertical levels of the data used.
2. L123: A description of HURDAT2 is needed, with relevant citations.
*3a. L138-139: Why was this unusual shape chosen? My guess would be to select for TCs close to the American coastline. The lower boundary is however rather irregular. If it is meant to follow the coastline, then why does the lower left corner include parts of the Pacific? The upper right corner extends to around longitude 30degW, which seems to lie beyond the ADDA_v2 domain, while the bottom right corner seems to be right at or extend past the ADDA_v2 domain. The northern boundary of the United States should lie just south of 50degN, but Fig 1a shows it at 40degN; the southern tip of Greenland is around 60degN, but extends well south of that in Fig 1a. It appears that Fig 1a is not a regular lat-lon projection but the X and Y ticks are those of a lat-lon projection. Also see minor comment 9b.
3b. L138-139: Is the shapefile the same as the ADDA_v2 domain? Otherwise the shapefile cannot restrict tracks to the ADDA_v2 domain, only to the “study domain”. It must be clear to the reader when the different domains are referred to in the text. I would like to know where the study domain lies in reference to the ADDA_v2 domain.
4. L142-143: Please show the 12degN latitude line in Fig 1. What about the treatment of the eastern boundary of ADDA_v2? WRF uses a Lambert Conformal projection and without seeing the properly plotted ADDA_v2 domain I find it difficult to consider the boundary treatment.
5. L143-144: Do you mean tracks entering into the study domain at latitudes above 35degN, and tracks inside the study domain that exceed latitude 50degN?
6a. L145-147: Please state what upper latitude threshold “Stansfield et al. (2020)” used. Do you mean Stansfield and Reed (2021) which uses a limit of 40degN, because Stansfield et al. (2020) uses a limit of 20degN? Stansfield et al. (2020) cited in L132 provides the parameters used in TempestExtremes to extract TC tracks, but since the parameters differ in this manuscript, it would be appropriate to state what was used similarly, perhaps in an Appendix or the Supplement.
Stansfield, A. M., & Reed, K. A. (2021). Tropical cyclone precipitation response to surface warming in aquaplanet simulations with uniform thermal forcing. Journal of Geophysical Research: Atmospheres, 126(24), e2021JD035197.
*6b. L145-147: I don’t understand how the selection of the upper latitude boundary at the end of the TC lifetime (as it may transition into an extra-tropical cyclone) has nothing to do with the intensification of the TC at the start of its lifetime. If the ADDA_v2 domain does not include all of the MDR, TCs at a later stage of development would still enter the RCM domain from the boundaries, though this would probably cause distortion of the TC, which is a different issue from the selection of the upper latitude limit.
Reasoning aside, this statement contradicts the manuscript’s earlier statement that the ADDA_v2 domain provides “ample space for TCs entering the [ADDA] domain to intensify” (L91). Is there or is there not ample space for TCs to intensify before reaching the study domain?
Does “we expect more recurving TCs to enter the domain later in their lifecycle” mean the study domain, and if so please explain why this would result in the need for the upper latitude limit to be increased.
7. L154-156: Is counting only done for TCs inside the study domain? I presume so, but please clarify this in view of L158-159 (major comment 8).
8. L158-159: Are TCDs calculated when TCs exist inside the ADDA_v2 domain, or the study domain?
9. L167: Please define vmax, describe how it is calculated from the gridded data and which HURDAT2 variable it corresponds to, and state whether vmax differs from Vm in L176. Is ACE calculated only for TCs inside the study domain? I presume so, but please clarify this in view of L158-159 (major comment 8). Was ACE only calculated for TCs of at least tropical-storm strength, since this was not stated in Equation 1?
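For reference, a conventional implementation of the ACE definition in Equation 1 would look like the following minimal sketch (not the authors' code); the tropical-storm wind threshold and the optional restriction to the study domain are exactly the choices this comment asks to be stated explicitly:

```python
# Hedged sketch of a conventional ACE calculation from 6-hourly vmax (knots):
# ACE = 1e-4 * sum(vmax^2) over fixes at or above an assumed 34 kt threshold.
import numpy as np

def accumulated_cyclone_energy(vmax_kt, in_domain=None, ts_threshold_kt=34.0):
    """ACE (1e4 kt^2) for one track from 6-hourly maximum sustained winds."""
    vmax_kt = np.asarray(vmax_kt, dtype=float)
    mask = vmax_kt >= ts_threshold_kt          # assumed tropical-storm cutoff
    if in_domain is not None:                  # optional study-domain restriction
        mask &= np.asarray(in_domain, dtype=bool)
    return 1e-4 * np.sum(vmax_kt[mask] ** 2)

# Example with a short synthetic track (knots)
print(accumulated_cyclone_energy([25, 40, 65, 90, 70, 30]))  # -> 1.88
```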
10. L173-175: Vm is the maximum sustained 1-minute wind speed. This variable is provided in HURDAT2, but please describe how it is obtained from ERA5 and ADDA_v2. ERA5 10 m wind is instantaneous wind. WRF outputs instantaneous wind by default, though it can be configured otherwise, so please clarify what it is in ADDA_v2. Refer to Fig 4 of Atkinson and Holliday (1977) (see minor comment 11 regarding the reference year). Please also describe the treatment of the different temporal resolutions of HURDAT2 and ERA5/ADDA_v2.
*11. L176: Atkinson and Holliday (1977) examined the WPR relationship for the western North Pacific, and made a comment regarding the North Atlantic: “For example, the average environmental pressure near the region of maximum tropical cyclone activity in the North Atlantic is about 10 mb higher than the corresponding area in the western North Pacific”. This does not mean that 1020 mb is the appropriate value to be used. Holland (2008) states “the 1010 is an assumed constant environmental pressure valid for the western North Pacific, but generally utilized elsewhere as well”, and fitted a good match with 1015 mb.
From Fig. 13c, the choice of 1020 mb seems excessive. The slope dVm/dPmin should approach zero as Pmin approaches Penv. Fitting can be repeated with different values of Penv to determine the best fit and most suitable Penv.
Holland, G. (2008). A revised hurricane pressure–wind model. Monthly Weather Review, 136(9), 3432-3445.
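A minimal sketch of the refit suggested above (synthetic stand-in data, not the manuscript's Fig. 5/13c points), repeating the power-law wind-pressure fit Vm = A(Penv - Pmin)^B for several candidate Penv values and reporting parameter uncertainties alongside the residual error:

```python
# Hedged sketch: refit the WPR with different assumed environmental pressures.
# The (Pmin, Vm) pairs are synthetic stand-ins for the manuscript's data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
pmin = rng.uniform(920.0, 1005.0, size=200)                      # hPa, stand-in
vm = 6.3 * (1013.0 - pmin) ** 0.65 + rng.normal(0.0, 3.0, 200)   # kt, stand-in

def fit_wpr(pmin, vm, penv):
    dp = np.clip(penv - pmin, 1e-6, None)                        # pressure deficit
    popt, pcov = curve_fit(lambda d, A, B: A * d ** B, dp, vm, p0=(6.0, 0.6))
    rmse = np.sqrt(np.mean((popt[0] * dp ** popt[1] - vm) ** 2))
    return popt, np.sqrt(np.diag(pcov)), rmse

for penv in (1010.0, 1015.0, 1020.0):
    (A, B), err, rmse = fit_wpr(pmin, vm, penv)
    print(f"Penv={penv:.0f} hPa: A={A:.2f}+/-{err[0]:.2f}, "
          f"B={B:.3f}+/-{err[1]:.3f}, RMSE={rmse:.2f} kt")
```

The Penv giving the smallest RMSE (or the fit whose parameter uncertainties are smallest relative to A and B) would be the defensible choice.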
*12. L180: Please discuss the choice of 2.5 degree bins. Supporting statements were made in L258-260 and L284-286, but this should be discussed properly in one place, in the methodology section. This is important because many of the downstream results depend on this choice. What would happen with 5.0 degree bins? I agree that small bins would be excessively influenced by individual tracks, so in this case this forms the basis of a quantitative selection criterion for bin size. For example, a minimum required sample size per bin can be determined (excluding bins that simply have no samples), and the bin size can be increased until this criterion is met. Alternatively, make a plot of the minimum sample size in any bin as the bin size changes, etc.
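A minimal sketch of the quantitative bin-size criterion proposed above (synthetic stand-in track points, not the manuscript's tracks): compute the smallest sample count found in any occupied bin as the bin size varies, and accept the finest bin size whose minimum count meets a chosen threshold.

```python
# Hedged sketch: minimum occupied-bin sample count as a function of bin size.
import numpy as np

rng = np.random.default_rng(2)
lon = rng.uniform(-100.0, -30.0, size=5000)   # stand-in TC track-point longitudes
lat = rng.uniform(10.0, 50.0, size=5000)      # stand-in TC track-point latitudes

def min_count_in_occupied_bins(lon, lat, bin_size_deg):
    lon_edges = np.arange(-100.0, -30.0 + bin_size_deg, bin_size_deg)
    lat_edges = np.arange(10.0, 50.0 + bin_size_deg, bin_size_deg)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    occupied = counts[counts > 0]
    return int(occupied.min()) if occupied.size else 0

required = 10  # minimum acceptable samples per occupied bin (to be justified)
for size in (1.0, 2.5, 5.0):
    m = min_count_in_occupied_bins(lon, lat, size)
    print(f"{size} deg bins: min occupied-bin count = {m} "
          f"({'OK' if m >= required else 'too sparse'})")
```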
13. L188: Was the processing carried out using the NodeFileCompose function of TempestExtremes? As with major comment 6a, if this is the case it would be appropriate to similarly state what was used, perhaps in an Appendix or the Supplement. Please clarify whether each TC was rotated to the direction of propagation. Bengtsson et al. (2007) did not perform this rotation, while Catto et al. (2010) did. Both are cited, so this needs to be clarified. If rotation was done, please confirm that TC-RADAR was also rotated.
*14. L192-193: Is Hurricane Isabel one of the 10 composited TCs? The reason for picking Hurricane Isabel is that it “stands out as one of the well-resolved hurricanes in ADDA_v2”. This case-study is then used to demonstrate ADDA_v2’s superior performance compared to ERA5. The selection of an example where ADDA_v2 performed well cannot be used to prove that ADDA_v2 performed well.
The use of Hurricane Isabel is questionable, especially in view of the potential errors in TC-RADAR prior to 2010 (L411-412). There are a number of Category 5 TCs after 2010, such as Irma (2017), Maria (2017), and Dorian (2019). Please explain why Hurricane Isabel was analysed instead of these strong post-2010 TCs.
15. L197-200: Since ADDA_v2 sometimes over-detects TCs, I would be interested to see the root mean squared error (RMSE).
16. L200-203: Since ERA5 is a reanalysis and ADDA_v2 is dynamically downscaled from ERA5, I would like to ascertain the manuscript’s statements on the ability of ERA5 and ADDA_v2 to capture interannual variability in TC count through the correlation of the time series of annual counts. ADDA_v2 clearly overcounts in 2005, 2006, 2015, 2017, and 2018. Is the comparable standard deviation between ADDA_v2 and HURDAT2 due to these excessive years? In that case the overall statistic may reflect a good comparison, but the reason for the good statistic may not really be good performance.
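A minimal sketch of the diagnostics requested here and in major comment 15 (synthetic stand-in counts, not the manuscript's data): RMSE and Pearson correlation of the annual TC count time series against HURDAT2.

```python
# Hedged sketch: RMSE and correlation of annual TC counts against HURDAT2.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
hurdat2 = rng.poisson(12.5, size=20)                 # stand-in annual counts
adda_v2 = hurdat2 + rng.integers(-3, 4, size=20)     # stand-in, loosely tracking HURDAT2

rmse = np.sqrt(np.mean((adda_v2 - hurdat2) ** 2.0))
r, p = pearsonr(adda_v2, hurdat2)
print(f"RMSE = {rmse:.2f} TCs/yr, Pearson r = {r:.2f} (p = {p:.3f})")
```

A high correlation together with a low RMSE would indicate that the comparable standard deviation genuinely reflects captured interannual variability rather than compensating overcounts.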
17. L234: It is difficult to conclude from Fig. 3 that TCD of high-intensity systems (which intensity category?) are under-represented relative to observations. Please support this statement with relevant details.
18. L234-235: Please clarify this statement “This may stem from the absence...” with respect to L91.
19. L235-236: Please clarify this statement. I do not understand it. Is the analysis done only for the blue polygon of Fig. 1b, over all three datasets? Is this describing systems that initialise inside the blue polygon? If so, it should be straightforward to differentiate between these systems and systems from outside.
20. L237-242: Please clarify what boundary and terrain effects artificially extend the tracks of landfalling TCs in the Sierra Madre mountains. It is quite convoluted to first “hypothesise [the effects] may inflate TCDs at TS intensity and below” and then, without even investigating whether this is or isn’t a problem, proceed to “retain these inland tracks while acknowledging their contribution”, which would be a hypothetical contribution.
ERA5, being a reanalysis, and ADDA_v2, having downscaled ERA5, should produce systems that match HURDAT2 systems. In the case of ERA5 and possibly ADDA_v2, some weaker HURDAT2 systems may become too weak to detect. On the other hand, false positives may be detected by TempestExtremes, which are tracks in ERA5 or ADDA_v2 with no corresponding track in HURDAT2. TCD can be calculated for matching tracks that are nicely recurving over the sea, or excluding those landfalling at the southern domain edge, or various combinations, to test the manuscript’s hypotheses.
21. L243: See major comment 6a. If there are excessive extra-tropical cyclones, perhaps the northern latitude limit should be reduced. ERA5 and ADDA_v2 tracks can also be filtered when the HURDAT2 status flag changes to EX, for the corresponding tracks.
*22. L269-273: Binning TC propagation vectors seems to be a questionable practice to me, since there is an inherent assumption that the TCs falling into the bin have similar vectors, such that the mean across TCs reflects some common characteristic of the TCs in this position. This is justified if the tracks being averaged are of a similar type, e.g. all recurving. Otherwise, I would raise a hypothetical case where a recurving TC with a northeast-ward translation vector is averaged with a non-recurving TC with a west-ward translation vector, resulting in a northwest-ward translation vector. It is questionable then to draw conclusions about TC track patterns from this methodology. I think it would be better either to plot the tracks directly, or to first group the tracks into similar types before averaging the translation vectors. As there can be a lot of tracks, intense systems (based on HURDAT2) can be selected, and the corresponding tracks plotted for the three datasets.
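One simple way to do the grouping suggested above, sketched with a crude recurvature heuristic and synthetic example tracks (not the authors' method; the threshold is an assumption for illustration):

```python
# Hedged sketch: classify tracks as recurving vs non-recurving before
# averaging translation vectors, instead of pooling all tracks in a bin.
import numpy as np

def is_recurving(lon, eastward_turn_deg=5.0):
    """Crude flag: the track ends noticeably east of its westernmost point."""
    lon = np.asarray(lon, dtype=float)
    return (lon[-1] - lon.min()) > eastward_turn_deg

def mean_translation_vector(lon, lat):
    """Mean (dlon, dlat) per time step along one track."""
    return float(np.diff(lon).mean()), float(np.diff(lat).mean())

# Two synthetic tracks: one recurving, one moving steadily westward
track_a = (np.array([-40., -55., -70., -72., -65., -55.]),
           np.array([12., 18., 25., 32., 40., 47.]))
track_b = (np.array([-45., -55., -65., -75., -85., -95.]),
           np.array([12., 13., 15., 16., 18., 19.]))

for name, (lon, lat) in [("track A", track_a), ("track B", track_b)]:
    group = "recurving" if is_recurving(lon) else "non-recurving"
    print(name, group, mean_translation_vector(lon, lat))
```

Averaging translation vectors within each group (or simply plotting each group's tracks) avoids the cancellation problem described in this comment.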
23. L275: There is no information supporting the more variable translation vectors north of 40degN; at the least, such a conclusion is highly subjective if it is based only on a visual inspection of Fig. 4g-i.
24. L281: There should be corresponding systems across the three datasets, despite misses and false alarms in ERA5 and ADDA_v2. It is expected that ADDA_v2 tracks will not match HURDAT2 tracks exactly. If this is a sufficiently severe issue, then the analysis of a single event (Hurricane Isabel) is not warranted. The composite is arguably warranted with the argument that the differences statistically average out. Please decide whether this is an issue, and formulate the manuscript accordingly.
25. L287-289: I am not sure if I understand this statement correctly. Please clarify why ERA5’s global domain is the cause of it faithfully capturing upper-level steering flow. I would think the cause is data assimilation in ERA5, since global domain climate models without data assimilation (DA) also develop their own internal variability. Similarly, ADDA_v2 develops its own internal variability due to the lack of DA or nudging, not because of its “limited” domain. If the cause is the domain size, numerical weather prediction regional models would not work.
26. L311: I do not understand how there is exponential decay (which I do not see anyway), when the variables have been fitted to a power law.
27. L312-322: As pointed out in major comment 11, more investigation is needed into the choice of proper Penv used in the fits. Furthermore, uncertainty estimates should be given for the parameters A and B produced by the fits. I would like to review this section after it has been revised. Please also refer to minor comments 28 on the presentation of Fig. 5.
28. L331-332: I would like to see the PDF or box-whisker plots of ACE, as well as the RMSE.
29. L334: As with major comment 16, I would like to see the correlation.
30. L336-337: I do not understand this statement. Please clarify what pattern is similar between the two, since annual accumulated ACE shows a time pattern (but no spatial pattern), but spatial distribution of ACE shows a spatial pattern (but no time pattern).
31. L338-339: The values of 0.24 and 0.2 seem small; are they significant? Given that 0.2 is displayed to one decimal place, 0.24 displayed to one decimal place would also be 0.2, making them the same.
32. L339-340: I think you refer to the local maximum in the spatial pattern. ERA5 is much weaker, but from the faint colour difference I can still see a local maximum; it is just much weaker.
33a. L359: What is the vertical resolution of ERA5? This seems to arise over about 3 vertical levels and 3-4 radial points. I would like to know if two warm cores are found in all/most of the TCs used in the composites, or only a few.
33b. L360-362: I do not think it is appropriate to draw conclusions about TCs above category 2 based on composites of just the 10 strongest TCs. If a double warm core in ERA5 were to be found in a composite of TCs above category 2, this may be a suitable statement.
34. L395-397: Tracks are only plotted starting from Sep 14. Is this when the tracks from all three datasets enter the blue polygon of Fig 1b, or is this just a choice of starting time for the plots? Hurricane Isabel also reached Cat 5 on Sep 11 and Sep 13. Since tracking was done in the whole ADDA_v2 domain, I would like to see the track positions, maximum wind speed, and minimum SLP for times when tracks are in the ADDA_v2 domain.
35. L428-429: The secondary circulation can be more directly checked by examining the radial velocity, instead of just “suggesting” this is the case.
36. L427-441 and L454-461: This seems to be a plausible explanation but the presentation can be improved. If the cause is the enhanced representation of small-scale convection in ADDA_v2, the explanation can start there and follow the chain of effects that finally ends with stronger winds and a more robust storm structure. The statements about wind shear (L442-452) and storm damage potential (L432-435) should be separated and not mixed in with the explanation.
37. L449-450: What is the objective criterion to judge whether the shear difference is insignificant, or not?
38. L516: This is a very vague statement. What are the other differences in model configuration that warrant further investigation?
39. L528-530: I question such a claim. ADDA_v2 covers only 20 years, which I would consider insufficiently long for the calculation of the typical N-year return events that concern critical infrastructure engineers. ERA5 is unable to simulate strong winds, but what advantages does ADDA_v2 have over HURDAT2, which contains observations over a much longer time period? Furthermore, while ADDA_v2 has a better tail than ERA5, the extreme right tail is still not well reproduced, and it is this tail that is important in the calculation of return values.
Minor comments / Technical corrections
1. L32: What is IPCC?
2. L71: What is ERA, specifically what is ERA-Interim?
3. L75: What is ERA5?
4. L75: What is HRCONUS?
5. L90: What is this domain? Is it the region shown in Fig 1a, and if so please reference it. Providing the longitude-latitude bounds in the text would also be helpful.
6. L115-122: Please clarify that this describes ADDA_v2, e.g. (from L115) “The Argonne Downscaled Data Archive V2 (ADDA_v2) is a continental-scale, CP regional model product (Akinasola et al., 2024)...”. Data temporal resolution (hourly) should be stated.
7. L127: What is NOAA?
8. L138-139: Reference Fig 1b. The text uses the term “study domain” while the figure uses the term “tracking domain”. Please stick to one consistent term in the manuscript. It should probably be “study domain”, because “tracking is conducted over the full domain of each dataset” (L138), so the “tracking domain” differs for each dataset.
*9a. Figure 1: Panels a and b do not show the same longitude-latitude extents, nor use the same projection. It is thus difficult to judge the location of the “study domain” with respect to the ADDA_v2 domain. There is in fact no need for two panels, since the study domain and 35deg line can be plotted on top of the topography. Either that, or draw the ADDA_v2 domain and topography in Fig 1b.
9b. L151-152: The trajectories are TCs so how can they then not be considered TCs if they do not enter the tracking domain?
9c. Figure 1: In panel b, the legend in the figure should be described in the text, e.g. “The dark blue polygon ...”, “The brown dashed line ...”.
9d. Figure 1: Image processing artifacts can be seen in the form of grey lines in the image.
10. L166: Bell et al. (2000) is not found in the References.
11. L173: Is this Atkinson and Holliday (1977)?
12a. L188: Which are the 10 TCs composited? Please confirm whether they are the same 10 systems across all three datasets.
12b. L188: There is only one Bengtsson et al. (2007) in the References, so “2007b” should not be used. I think the reference in L576-578 should instead be:
Bengtsson, L., Hodges, K. I., Esch, M., Keenlyside, N., Kornblueh, L., Luo, J. J., & Yamagata, T. (2007). How may tropical cyclones change in a warmer climate?. Tellus A: Dynamic meteorology and oceanography, 59(4), 539-561.
13a. L203: The “mean seasonal cycle” is not shown in Fig. 2b. The distribution was plotted and the median is shown. Either mark the mean value in Fig. 2b, or change the text.
13b. Figure 2b: Only June to November are shown. If the authors do not wish to plot the other months because there are no counts in them, please clarify this either in the main text or the figure caption.
13c. Figure 2: The legends of Figs 2a and 2b should list the datasets in the same order. Please describe the legend in the caption text, e.g. “Red bars ...”, “Blue bars ...”, etc.
13d. Figure 2: It seems appropriate to show the distribution of annual counts too, i.e. summarise Fig 2a in a comparison similar to Fig 2b but for the whole year.
14a. L215: Is this “Figure 3 a-c”?
14b. L220: Is this “Figure 3 d-f”?
14c. Figure 3: Do any of the tracks include systems of below TS intensity? If so please show the counts too.
14d. Figure 3: Fig. 3a in particular presents similar information to Fig 2, but in a different manner. It is difficult to make a proper comparison between datasets using stacked plots. Hence, I would like to see the counts of the different TC intensities compared between datasets as in Fig 2a, as well as an overall climate comparison as suggested in minor comment 13d.
14e. Figure 3: The legends should be described in the figure caption.
14f. Figure 3: The y axis label of panel d has been cropped off. Please fix this.
17a. L233: Please explain what parameters can be tuned, and what diagnostics indicate which aspects of ADDA_v2’s performance could be improved by resolution refinement.
18. L238: This filtering criterion is not described in the methodology.
19. L256: What is RMSE?
20. L246: Please define the diagnostics “precision difference”, “mean error”, “modeling yield”, and “Q95 difference” in the methodology section.
21. L263: Please describe how translation vectors are calculated in the methodology section.
22. L264: See minor comment 8 regarding the use of “tracking domain”. Is this the blue polygon in Fig 1b?
23a. L267: Please clarify whether the “eastern domain boundary” is the same as the “eastern edge of the tracking domain” in L264.
23b. L267: “possible boundary noise” is a very vague term. Please clarify what this means. Does it mean that ADDA_v2 tracks enter the blue polygon at different positions compared to HURDAT2 and ERA5? This is expected because ADDA_v2 was not nudged in the interior domain. There is no need for this to be “possible” because the track positions of corresponding systems can be found for the three datasets and compared.
24. L295: Please describe how maximum 10 m wind speed along the TC track is produced from ERA5 and ADDA_v2 in the methodology section.
25. L298-300: Such point exclusion should be stated in the methodology section, not here.
26. L303-305: Please define the diagnostics skewness error and overlapping ratio in the methodology section. Is “skew error” the same as skewness error, and if so please stick to one term.
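For reference, one common definition of an overlapping ratio between two distributions is the integral of the pointwise minimum of their PDFs; whether this matches the manuscript's diagnostic is exactly what should be stated in the methodology. A minimal sketch with synthetic stand-in samples:

```python
# Hedged sketch: overlap coefficient of two kernel-density-estimated PDFs.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
sample_a = rng.normal(60.0, 15.0, size=500)   # stand-in vmax sample (kt)
sample_b = rng.normal(55.0, 12.0, size=500)   # stand-in vmax sample (kt)

grid = np.linspace(0.0, 160.0, 801)
pdf_a = gaussian_kde(sample_a)(grid)
pdf_b = gaussian_kde(sample_b)(grid)
overlap = np.sum(np.minimum(pdf_a, pdf_b)) * (grid[1] - grid[0])  # 1 = identical
print(f"overlapping ratio = {overlap:.2f}")
```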
27. L308-309: This seems apparent for ERA5, but it is not clear for ADDA_v2 without calculating the area under the PDF curve in each category.
28a. Figure 5: The dashed lines indicating the intensity category should be labelled with what intensity category they delimit. These lines should also appear in panel c.
28b. Figure 5. Fit parameters a and b should be A and B.
28c. L327-328: Is it “... the integral of the overlapping area between the ...”?
29. L332: Please check units “kt2”.
30. L333: I think 2006, 2008, 2015, 2017, or 2018 better illustrate the point.
31a. Figure 6: Legends should be described in the figure captions.
31b. Figure 6: R should be defined.
32. L430: Only panels a, b, d, and e of Fig. 10.
33. L438: Is this Figs 10cf?
34. Figure 8: The color scheme for HURDAT2, ADDA_v2, and ERA5 differs from that in the previous figures. Please use the same color scheme in all figures.
35. Figure 9: I think it would be better if panels a-g and panels j-o are split into two separate figures.
36. L522: Please clarify “the different pathways between the two storms”. I do not understand this statement. Isn’t there only one storm (Hurricane Isabel)? What are the “different pathways”?
Citation: https://doi.org/10.5194/egusphere-2025-1805-RC1
RC2: 'Comment on egusphere-2025-1805', Anonymous Referee #2, 05 Aug 2025
This manuscript evaluates the 20-year convection-permitting simulation "ADDA v2", forced by ERA5, regarding North Atlantic tropical cyclones (TCs). The study thoroughly compares ADDA v2 against HURDAT2 best-track data and ERA5 reanalysis for TC frequency, seasonality, intensity, spatial distribution, structural characteristics, and includes an insightful detailed case study of Hurricane Isabel (2003). The findings indicate that ADDA v2 successfully reproduces observed annual TC counts, resolves Category 4 TC intensity not captured by ERA5, provides significant added value over data-sparse regions such as the Caribbean and Central American coasts, and resolves important inner-core structures absent from ERA5. A valuable process-based analysis demonstrates these improvements are due to enhanced boundary-layer inflow, increased surface enthalpy fluxes, and strengthened secondary circulation within the convection-permitting model.
The manuscript is well organized, clearly written, and contributes significantly to the understanding of convection-permitting simulations of tropical cyclones. However, several key points require clarification and further discussion before acceptance. Therefore, I recommend a moderate revision, with the following specific suggestions:
Specific Comments:
- Tracking algorithm uncertainty: The study exclusively employs the TempestExtremes (TE) tracker for TC detection. It is recommended that the authors briefly summarize or discuss how alternative tracking methods or varying TE thresholds might impact the TC count and intensity results. Additionally, the authors should explicitly address whether ERA5’s inability to produce Category ≥3 TCs could partly be related to limitations in detection algorithms rather than solely due to ERA5’s inherent resolution limitations.
- Atmosphere-only configuration: Given that ADDA v2 is an atmosphere-only simulation with prescribed SSTs, the authors should state this setup clearly and briefly discuss the potential impacts of omitting air-sea coupling processes. Specifically, the authors should address whether the lack of coupling could contribute to the inability of the simulation to reproduce Category 5 intensity TCs.
- Boundary data dependency: The simulation is solely driven by ERA5 boundary conditions. It would be beneficial to briefly discuss how utilizing alternative reanalysis datasets (e.g., MERRA-2, JRA-55) might influence the reproduction of higher-intensity TCs (Category ≥3). Providing references to existing literature that has evaluated other reanalyses in this context would enhance the manuscript and offer useful guidance for future work.
Citation: https://doi.org/10.5194/egusphere-2025-1805-RC2