This work is distributed under the Creative Commons Attribution 4.0 License.
Southern Annular Mode Persistence and Westerly Jet: A Reassessment Using High-Resolution Global Models
Abstract. This study evaluates the performance of high-resolution (grid sizes of 9–28 km for the atmosphere; 5–13 km for the ocean) global simulations from the EERIE project in representing the persistence of the Southern Annular Mode (SAM), a critical driver of Southern Hemisphere climate variability. Using the decorrelation timescale of the SAM index (τ), we compare EERIE coupled and atmosphere-only (AMIP) simulations with CMIP6 and ERA5 datasets. EERIE coupled simulations reduce the long-standing biases in SAM persistence, especially in early summer, with τ values of 9–17 days compared to CMIP6’s 9–32 days. This improvement generally correlates with a more accurate climatological jet latitude (λ0) distribution in EERIE simulations than in CMIP6, but the correlation is not robust within EERIE AMIP simulations with a well-represented jet location, suggesting other factors at play. With prescribed SSTs, EERIE AMIP simulations show even smaller biases in both τ and λ0 than EERIE coupled runs, highlighting the critical role of SST representation. Using the same AMIP model, finer grids (9 km vs. 28 km) can further reduce τ, but the underlying cause remains unclear, likely because of potential compensation between different processes. Sensitivity experiments filtering ocean mesoscale features in SST boundary conditions suggest that mesoscale processes enhance SAM persistence by ~2 days in early summer, though this effect is clear in ensemble means at 28 km but not in the single 9-km runs. We also show that the atmospheric eddy feedback strength is a better indicator of SAM persistence than λ0, although the metric alone does not fully explain the τ differences across SST scenarios. These findings underscore the interplay of dynamic processes influencing SAM persistence and offer insights for advancing global climate model performance.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-666', Anonymous Referee #1, 27 Mar 2025
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2025/egusphere-2025-666/egusphere-2025-666-RC1-supplement.pdf
- RC2: 'Comment on egusphere-2025-666', Anonymous Referee #2, 03 Apr 2025
"Southern Annular Mode Persistence and the Westerly Jet: A Reassessment Using High-Resolution Models" examines the relationship between SAM persistence, the climatological jet latitude, and the classic eddy-feedback parameter in both CMIP6 models and a new suite of high-ocean-resolution models (EERIE), with some added AMIP-style simulations with one EERIE model to help interpret the effects of increased resolution. The work finds that EERIE simulations have much lower bias in the SAM timescale than CMIP6, particularly in summer (traditionally the worst season). The bias is even lower in the AMIP simulations forced by observational SSTs, which suggests that ocean-atmospheric coupling may contribute to the bias. While these EERIE simulations have a lower bias and higher resolution than most CMIP6 models, the CMIP models do not show much dependence on horizontal resolution. Instead, previously established relationships relating the SAM timescale to the jet latitude seem to hold for the CMIP models. For EERIE models, this relationship breaks down, and the eddy-feedback parameter has better correlations with the annular mode timescale. When SST gradients are reduced in the AMIP-style simulations, the persistence is reduced, although the cause is unclear.
I cannot recommend the paper for publication in its current form. With substantial revision and extended analysis it could eventually be published, but in its current state the paper presents only a very marginal advance in knowledge in the area of SAM timescales, and the results are challenging to interpret without more context in the literature and clearer interpretive frameworks.
Despite these criticisms, the paper does a few things well. First, I think the question is well-defined: what are the impacts of high-resolution atmosphere and ocean models, and their coupling, on SAM persistence? I also think they have the data available to address this question, but it needs to be much better utilized. They outline their methodology in a very reproducible way, and generally they follow the previous literature (to a point). The writing is of good quality and reasonably easy to follow.
My major concerns are summarized below; a detailed discussion follows. The novel contributions of this work are the analysis of high-resolution simulations, the SST sensitivity experiments, and the consideration of friction to explain intermodel differences. All of these contributions require serious improvement.
Regarding the analysis of high-resolution simulations: the simulations are all short (10 years) and frequently non-stationary (spin-up) or non-overlapping with the observational record. Given the long timescales required for SAM timescale convergence, the significant impacts of non-stationarity on the estimation of the timescale, and the potential for decadal and supra-decadal variability in the feedback itself (following the jet latitude), interpreting the difference between the EERIE simulations, ERA5, and CMIP6 is very challenging. Clearly the bias is reduced, but it is not clear at present whether this is due to artefacts, random chance, or physically meaningful reductions. This problem could be partially alleviated by carrying out the bootstrapping techniques used for the reanalysis for the EERIE simulations. Longer runs/overlapping time periods would be preferable, but given the computational expense involved the current simulations might be acceptable given appropriate explanation of the caveats involved.
Regarding the SST sensitivity experiments: these simulations require much clearer justification. At the moment, there is very little literature which would suggest that mesoscale SST gradients should affect the SAM persistence. There are some possible physical arguments, but they are not given here. One might argue that the fact that they do appear to influence the timescale in some small ensembles using one model is justification enough, but without a solid hypothesis to test, there is no definitive answer about why the change in timescale appears. It is entirely possible that it is by chance (no estimate of sampling uncertainty is provided). An incomplete argument discusses the role of surface friction, but it requires more explicit discussion of possible mechanisms. Some additional analysis on why persistence changes grounded in possible physical mechanisms would drastically strengthen the paper. It also needs to be much clearer why two different types of mesoscale features are included. My understanding of the feedback literature is that there is no reason to expect different results from the two, and they provide basically identical results.
Regarding the consideration of friction: The literature has established methods for estimating surface friction using the model output available to the authors, but they are not followed here. Instead, the authors estimate the frictional contributions in a way which is difficult to connect to established theory of SAM persistence and the feedback parameter they calculate, and in a way that also cannot be interpreted easily across model resolutions. Its physical units are not transformed to be consistent with the momentum budget (their interpretive framework). Much more work needs to be done regarding friction if it is to be used to interpret these simulations.
Without significant improvements in its main areas of new contributions, the work only marginally advances knowledge in the area of SAM persistence.
Finally, the work would be much stronger if it was better contextualized in the SAM feedback literature. It follows the SAM model bias literature reasonably well. Specifically, it should consider a few key areas which have important bearing on its results: 1) the evidence that the "feedback" which appears in austral summer (the focus of this work) is likely not due to eddy-mean flow interaction but nonstationary stratospheric variability (e.g., Byrne et al. 2016). 2) the understanding of SAM feedback mechanisms [barotropic (e.g. Lorenz and Hartmann 2001, Chen et al. 2008) vs baroclinic (e.g., Robinson 2000) vs diabatic (Xia and Chang 2014, Smith et al. 2024)] and how those feedback mechanisms might explain the role of surface friction and SST gradients. 3) The evidence for propagation of the SAM and for the stronger connection between SAM propagation biases and persistence biases than for eddy-feedback biases and persistence biases (e.g., Lubis and Hassanzadeh 2023). 4) the importance of the SAM timescale for climate predictability (e.g., Simpson and Polvani 2016, Ma et al. 2017, Hassanzadeh and Kuang 2019).
Specific Comments
Line 29: Assertion "eddy feedback is a better indicator" needs more justification and/or more clarity (better in what way? In what circumstances?)
Line 30-31: "These findings…offer insights": More specific language would be stronger (what insights?)
Introduction: I think this discussion would be stronger if it included the significance of the persistence. As it stands, the section reviews the SAM and its significance for SH climate, what persistence is, some of its potential causes (non-stationarity is not discussed, see following comment), its biases in GCMs, and some potential solutions for these biases. The problem is identified, but there exists a kind of motivational gap. The paper's conclusions would be strengthened for unfamiliar readers if the significance of persistence was explicitly discussed.
Lines 67-70, 79-95: An eddy-jet feedback is not the only possible source of persistence for the SAM. There is substantial literature published after the papers reviewed here which highlights the possibility for a "feedback" caused by non-stationarity induced by stratospheric forcing (Byrne et al. 2016, 2017, Saggioro and Shepherd 2019, etc.). This kind of forcing is especially important during early summer, the focus of this paper (see Byrne et al. 2016), when the stratospheric polar vortex breaks down. Thus, the feedback parameters computed here may not be responding to any internal tropospheric dynamics but the coupled troposphere-stratosphere system. The bias correction of Simpson et al. (2013a) suggests that this non-stationarity may not influence model biases, but it does not eliminate its possibility as one source of the feedback (Simpson et al. 2011 probably even supports this). Ma et al. 2017 further supports the notion of an eddy-feedback independent of non-stationarity, but a comprehensive discussion of this would improve the interpretation of the results.
Line 143: Which segments of spin-up runs are retained? How much time is allowed for equilibration before using them for analysis? Given the importance of the stratosphere for the SAM and the time for its equilibration (~1 year), I would hope at least the first year is excluded from the analysis.
Line 148: I'm not fully convinced by this reasoning. In part, Byrne et al. (2016) show that non-stationarity does influence the calculation of eddy feedbacks, even given linear detrending. The spin-up simulations are certainly non-stationary, although that somewhat depends on whether some/how many of the initial years are omitted. The control simulations are likely not subject to this, but the historical simulations may be as well. Presumably they are more like reanalysis, but the point is that the differing periods may in fact influence the results beyond removing their climatological means (stationary or not). This non-stationary influence is notable in ERA5, where the bootstrapping Figure 2 shows substantially different decay timescales. In the case of NDJ, more likely influenced by non-stationarity, the two estimates are nearly non-overlapping. This is another argument that the analysis time period is not a trivial consideration. One way to partially address this concern is to bootstrap estimates for EERIE simulations, as done with ERA, particularly for simulations with few ensemble members (9km-AMIP in particular). Another concern is that the 10 years available for most simulations is not long enough to see strong convergence of the timescale, especially in coupled models (Gerber et al. 2008).
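To make the bootstrapping suggestion concrete, here is a minimal sketch of the kind of calculation I have in mind: resample whole years with replacement and recompute the e-folding time of the seasonal-mean autocorrelation function. All names, parameter choices, and the red-noise test data are mine and purely illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def efolding_timescale(sam, maxlag=40):
    """e-folding time (days) of the mean autocorrelation function of a
    seasonal SAM index with shape (nyears, ndays)."""
    ndays = sam.shape[1]
    acf = np.empty(maxlag + 1)
    for lag in range(maxlag + 1):
        x = sam[:, :ndays - maxlag].ravel()
        y = sam[:, lag:lag + ndays - maxlag].ravel()
        acf[lag] = np.corrcoef(x, y)[0, 1]
    below = np.flatnonzero(acf < np.exp(-1))   # first crossing of 1/e
    return float(below[0]) if below.size else float(maxlag)

def bootstrap_tau(sam, nboot=500):
    """Resample whole years with replacement; return the 2.5th, 50th,
    and 97.5th percentiles of the recomputed timescale."""
    nyears = sam.shape[0]
    taus = [efolding_timescale(sam[rng.integers(0, nyears, nyears)])
            for _ in range(nboot)]
    return np.percentile(taus, [2.5, 50, 97.5])

# illustrative red-noise "SAM index": 10 years x 90 days, ~10-day timescale
phi = np.exp(-1 / 10.0)
sam = np.zeros((10, 90))
for d in range(1, 90):
    sam[:, d] = phi * sam[:, d - 1] + rng.normal(size=10)
lo, med, hi = bootstrap_tau(sam)
```

With only 10 years, the spread between the percentiles should itself make the point about sampling uncertainty in the short EERIE runs.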
Lines 172-174: I would like more clarification about the choice to test the sensitivity of SAM persistence to different ocean mesoscale features. I do not understand the motivation very clearly. The zonal-mean, vertically-averaged zonal wind is a planetary-scale phenomenon, and while it is sensitive to ocean meso-scale features, I do not understand why it might be sensitive to one type over the other. The atmospheric eddies which power SAM and (potentially) its persistence are of a scale of 1000km, 10-100 times the scale of these features. While such temperature gradients can be important for lower-level baroclinicity and the organization of convection, the large-scale drivers of SAM represent a further aggregation of these smaller scale dynamics. Indeed, there is currently no proposed mechanism (so far as I am aware) which argues that SAM should respond differently to these features. The idea that high-frequency SST gradients might strengthen the boundary layer heat flux, potentially enhancing boundary layer drag and strengthening the baroclinic feedback could be one argument, but it does not differentiate between eddies and fronts. In general, these results should be discussed in light of theories for baroclinic feedbacks on SAM persistence (Robinson 2000, Zurita-Gotor 2014, Zurita-Gotor et al. 2014). Diabatic feedbacks may also play a role here (Xia and Chang 2014, Smith et al. 2024).
Line 230: "for the same date in a calendar year". I think I know what this means, but more clarity would be better
Equation (1): y seems to be year, but it is not explicitly defined. The separation of t into d, y could be more clearly explained (see previous comment)
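For what it's worth, the anomaly definition I am picturing from Equation (1) — for each calendar day d, subtract the mean over years y — amounts to something like the following (the (year, day) array layout is my assumption, not the authors' code):

```python
import numpy as np

def daily_anomalies(x):
    """x[y, d]: value on calendar day d of year y.
    Returns x'(y, d) = x(y, d) - mean over years of x(:, d)."""
    return x - x.mean(axis=0, keepdims=True)
```

If this is what is intended, stating it this explicitly (with y defined) would resolve the ambiguity; by construction the anomalies average to zero across years for every fixed calendar day.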
Line 249: As mentioned previously, this should be repeated for simulations with few ensemble members (5 or less).
Figure 1: Other reviewers and readers may disagree with me, but I think Figure 1 belongs in the supplemental materials. The freed-up publication unit (PU) could be used much more effectively for other topics, some already mentioned, some to be mentioned. A very large majority of WCD readers interested in SAM and the SAM timescale know what the pattern looks like, and if not, it is easily found. A more useful figure might compare the pattern across models. A similar argument holds for the timeseries: the raw timeseries is not relevant to the analysis being performed. Both are referenced once, only in passing. Panel c is more useful, but it is a visual explanation of e-folding time, which will be familiar to many readers, climate-oriented and not. Figure 1 could be more useful if it also depicted how the eddy feedback parameter (b) is calculated, as this is a more complex and less familiar calculation. Even with such an inclusion, I have a hard time justifying including Figure 1 in the main body of the text.
Line 265: Because many of your models have different resolutions (particularly CMIP6 vs. EERIE; you mention regridding CMIP6 to the same grid, but not EERIE), I would highly suggest following Menzel et al. (2019) or Barnes and Polvani (2015) and doing quadratic interpolation around the jet maximum to define the jet latitude. This will alleviate some of the degeneracy (models with identical jet latitudes) in Figure 3 and is consistent with the literature.
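A sketch of the quadratic-interpolation approach I am suggesting, in the spirit of Barnes and Polvani (2015); the function and variable names and the Gaussian test jet are mine, purely for illustration:

```python
import numpy as np

def jet_latitude(lat, u):
    """Jet latitude from a quadratic fit through the three points
    around the grid-point maximum of the zonal-mean zonal wind u(lat)."""
    i = int(np.argmax(u))
    i = min(max(i, 1), len(lat) - 2)   # keep a valid 3-point stencil
    a, b, _ = np.polyfit(lat[i - 1:i + 2], u[i - 1:i + 2], 2)
    return -b / (2 * a)                # vertex of the fitted parabola

# illustrative: a Gaussian jet peaking at -51.3 deg, sampled every 2.5 deg
lat = np.arange(-70.0, -30.0, 2.5)
u = np.exp(-((lat + 51.3) / 8.0) ** 2)
phi_jet = jet_latitude(lat, u)
```

The fitted latitude recovers the true maximum to well within the grid spacing, which is exactly what removes the degeneracy of models sharing identical grid-point jet latitudes.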
Line 273: The switch from geopotential height for defining the timescale to zonal wind for defining the feedback is not without caveats. The assumption here is that the wind relevant for the SAM (and its feedbacks) is the geostrophic wind. Recently, however, Smith et al. (2024) demonstrated that SAM has significant eddy feedbacks from the ageostrophic momentum fluxes, which are leading-order in DJF in MERRA2. Vishny et al. (2024) also find important contributions to persistence from the ageostrophically driven mean meridional circulation in idealized simulations. Thus, the assumption that a decay timescale based on geopotential height will be consistent with a feedback computed from the full (geostrophic + ageostrophic) zonal wind is plausible, but not guaranteed. I think there is enough literature supporting both methods (geopotential height and zonal wind) that it is not reasonable to redo the ACF calculations using zonal wind, but I do think it is worth acknowledging the geostrophic assumption and its limitations.
Line 278: I think the choice of three levels by Simpson et al. (2013b) was not intended to be ideal; rather, it was the best available information at the time (CMIP3). The vertical structure of SAM can be quite nuanced despite its barotropic nature (Wall et al. 2022, Sheshadri et al. 2018), and a significant fraction of the eddy momentum flux necessary for the feedback exists above 250 hPa (Nie et al. 2014, Sheshadri et al. 2018). I suspect many more than those three levels are available for CMIP6, and their inclusion would strengthen this analysis.
Line 300: The assumption of the Simpson framework is that the PCs are uncorrelated. Sheshadri and Plumb (2017), Lubis and Hassanzadeh (2020), Lubis and Hassanzadeh (2023), and Smith et al. (2024) have all shown this is not the case. Specifically, Sheshadri and Plumb (2017), Lubis and Hassanzadeh (2020), and Lubis and Hassanzadeh (2023) have shown that the coupling between EOF1 and EOF2 influences the SAM persistence timescale and the estimation of the eddy feedback parameter, and that the SAM timescale in CMIP6 models shows a strong dependence on the strength of the coupling between EOF1 and EOF2 (as measured by SAM's propagation period, see Lubis and Hassanzadeh 2023, Figure 7). Without examination of the coupling between modes across models, the spread in eddy-feedback parameters is difficult to interpret.
Line 310-325: I have two concerns involving the friction term. First, more could be done to properly estimate it and, second, to utilize it in the interpretation of the results. I will begin with its estimation. Given τ as the surface stress, one can estimate the resulting torque as d(ρ⁻¹τ)/dz (see Vallis 2006, Eq. 2.270), ρ being density. If you only have τ at the surface, then because you are vertically integrating, you can simply use the surface value and divide by the depth (in meters) of the atmospheric column, as the turbulent stress is likely zero at the model top. This faux-integration yields a net negative sign (since the stress decreases with height) and the right units: (N m⁻²) · (m³ kg⁻¹) / m = m s⁻². This approach should still be approximately valid in the case that the "surface" stress is actually the output turbulent stress from the boundary layer scheme for the full boundary layer.
However, the friction in Lorenz and Hartmann (2001; and in other studies building on this framework) is generally parameterized as Rayleigh drag with a constant damping timescale. LH2001 explain in their Appendix A how to estimate it from timeseries of m and z. Since both of these fields are used in this analysis, it should be possible to estimate a friction via Rayleigh drag. This has two key benefits: 1) it can be used to validate the friction estimated from the stress, and triple checked against the residual of the momentum budget, evaluated from your equation (3), which should also be possible. In my experience, the residual usually matches the Rayleigh drag quite well. The second benefit is that it is useful for the interpretation of the feedback parameter, which I discuss more later.
A final issue with the estimation of the friction (no matter which method; preferably at least 2 of the 3) is that its projection value is proportional to the square root of the number of latitudes, and thus its magnitude should not be compared directly across simulations with different horizontal resolution. This is true for all the budget terms, but the feedback parameter is resolution-independent because it involves the ratio of two budget terms. To understand why, consider a simplified version of your equation (2) where W = I (the identity). If e is a (square-)integrable function f(λ) sampled on an equally spaced grid (reasonable for GCM output), its Euclidean norm will be proportional to the square root of the integral of [f(λ)]² over latitude λ, divided by the grid spacing (since we multiply by the grid spacing to convert the sum into an integral). The integral converges to the same value regardless of resolution for most smoothly varying, well-resolved f (again reasonable at even coarse GCM resolutions), but the inverse of the grid spacing is proportional to the number of latitudes N (if the grid is evenly spaced). Thus the norm of e is proportional to √N. The product X·e is proportional to N (not its square root), by the same logic (because e is an orthogonal basis vector, the only component of X that survives the integration is the one proportional to e, and its contribution scales like e·e). However, X·e carries no square root, and thus X·e/√(e·e) is proportional to √N. See the attached image for a small example which should generalize well. Note that including a non-identity weighting matrix W ≠ I does not change this; it simply adds another term into the integration. One could divide by √N to alleviate this, or use integrals in the numerator and denominator instead. Or, one could divide the friction by the zonal-wind projection, as done for the feedback parameter.
At that point, you may as well compute the damping timescale following the literature (LH2001, Appendix A).
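Since the attached image may not reproduce here, the √N scaling is also easy to verify numerically; a toy demonstration, with cos λ standing in for the EOF pattern e (my choice, any smooth well-resolved pattern behaves the same):

```python
import numpy as np

def projection_norm(N):
    """Euclidean norm of a fixed smooth pattern e(lambda) sampled on N
    equally spaced latitudes; no grid-spacing weighting applied."""
    lam = np.linspace(-np.pi / 2, np.pi / 2, N)
    e = np.cos(lam)          # stand-in for a smooth, well-resolved EOF
    return np.linalg.norm(e)

# quadrupling the number of latitudes doubles the norm (sqrt(4) = 2)
ratio = projection_norm(720) / projection_norm(180)
```

The same pattern, sampled four times as finely, has twice the Euclidean norm, which is exactly why unnormalized projections cannot be compared across resolutions.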
The second friction-related issue is with the interpretation of the feedback. Following LH2001, the eddy feedback parameter (b) lengthens the effective timescale of the SAM to tf/(1 − b·tf), where tf is the frictional timescale. Thus, both the eddy feedback and the frictional timescale can affect SAM's persistence, and if models have differing frictional timescales, that could also explain differences in their persistence. In theory, one could check whether this effective timescale tf/(1 − b·tf) follows the autocorrelation timescale more closely (I suspect it would), but the model-bias literature (Gerber et al. 2008, Kidston and Gerber 2010) generally does not follow this convention, so I don't think this is strictly necessary. However, it may give a better interpretive framework for the friction to plot the frictional timescale (rather than the projection) and use this LH2001 relation to explain how the frictional timescale interacts with the eddy-feedback parameter to determine the total timescale.
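For reference, the LH2001 relation I am invoking, as a one-liner (tf in days, b in day⁻¹; the example numbers are purely illustrative):

```python
def effective_timescale(tf, b):
    """LH2001: a positive eddy feedback b (day-1) stretches the
    frictional timescale tf (days) to tf / (1 - b*tf); requires
    b*tf < 1 for a finite, positive result."""
    assert b * tf < 1.0, "feedback too strong for a stationary balance"
    return tf / (1.0 - b * tf)

tau_eff = effective_timescale(7.0, 0.05)   # 7 / (1 - 0.35), ~10.8 days
```

Even a modest spread in tf across models translates into a substantial spread in the effective timescale at fixed b, which is why I think the frictional timescale deserves a plot of its own.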
Figure 2 (caption): Please describe the violin plot in more detail; I don't believe they are common enough to assume they can be interpreted properly without explanation.
Lines 396-398: I think this point on the interpretation of the IFS-AMIP experiments requires more discussion and computation. These are an IC ensemble from the same GCM with the same boundary conditions, and thus represent internal variability of the same mean climate in a way that isn't the case for comparisons across the CMIP models. For example, you could likely run 5 more IC ensembles, and you might get a completely different pattern between jet latitude and e-folding timescale. But I don't think that somehow contradicts the expectation that the two should be positively correlated due to the stronger wave reflection (and weaker feedback) of more poleward jets (Barnes and Hartmann 2010, Lorenz 2023). Despite this, according to the convergence estimates of Gerber et al. (2008), the 40 years of AMIP simulations should be enough to constrain the decorrelation timescale within a day, and 4 of the ensemble members are within one day of their mean. This is where I think bootstrapped estimates of the sampling uncertainty could help resolve this question of whether sampling uncertainty can explain the lack of relationship, or whether this is indeed a breakdown of the expected theory.
Figure 3: When uncertainty exists in both the dependent and independent variables of a regression, it may be more appropriate to use a different type of regression than least-squares, especially if the uncertainties are correlated (see Pendergrass and Kao 2022, and York 2004 for an alternative).
Line 415: Sample size of one, not enough evidence to support conclusion (bootstrapping would help)
Lines 425-430: Some connection to existing feedback mechanisms would be appropriate here
Lines 441-449: See previous comments regarding friction
Line 462: Is the convection parameterization turned off at 9km? Stronger latent heating in the 9km run could create a stronger negative diabatic feedback (Xia and Chang 2014), decreasing the persistence
Figure 4: Panels (f), (g) and (h) should be greatly simplified, maybe down to one panel (or even a table), showing the simulation on one axis and the value of the x axis on the other. The decay timescales are identical, and two points is not enough to infer any relationship, so the current scatter plots visually complicate the comparison between simulations. Readers will understand why they do not follow (b), (c), and (d), no need to artificially fit that pattern.
Line 492: For the 10 year, coupled EERIE simulations, I'm not convinced this is long enough to really reduce the sampling uncertainty, which converges very slowly (see Gerber et al. 2008). Bootstrapped measures would help alleviate this concern; without such attempts, it is hard to interpret the difference between the EERIE simulations and the longer CMIP6 simulations (and the longer IFS-AMIP simulations for that matter).
Line 540: please clarify: better indicator of what?
Line 541: "more statistically significantly" If I recall, while the p-value was small, the result was not significant. I think significance is too binary for this language. I would stick with language which discusses what a small p-value means (the relationship is unlikely to be due to random chance).
Line 522: I'm not convinced the path forward is that promising from these results. A higher resolution atmosphere helps. That is good. But it does not seem to benefit from being coupled (bias improves in AMIP) and it does not seem to benefit from mesoscale ocean features (smoothed SST runs have lower bias). Improvements in jet latitude at these resolutions do not seem to help either. However, the climate community will want to run coupled models for the estimation of climate variability and sensitivity for the foreseeable future. If other models behave like IFS (a big assumption), it is likely models will be stuck with some irreducible bias in SAM timescale. Perhaps I am too pessimistic. If so, please help me understand what other path these results suggest.
Technical Corrections
Lines 73, 90, 240: Simpson 2013 referenced without a/b
Lines 376-379: This sentence "However, it is also possible… more critical" could benefit from more clarity, including maybe breaking into smaller sentences.
Lines 511-514: Rephrasing (and separating into smaller sentences) would improve clarity here
References
Barnes, E. A., & Hartmann, D. L. (2010). Testing a theory for the effect of latitude on the persistence of eddy-driven jets using CMIP3 simulations. Geophysical Research Letters, 37(15). https://doi.org/10.1029/2010GL044144
Barnes, E. A., & Polvani, L. M. (2015). CMIP5 Projections of Arctic Amplification, of the North American/North Atlantic Circulation, and of Their Relationship. Journal of Climate, 28(13), 5254–5271. https://doi.org/10.1175/JCLI-D-14-00589.1
Byrne, N. J., Shepherd, T. G., Woollings, T., & Plumb, R. A. (2016). Annular modes and apparent eddy feedbacks in the Southern Hemisphere. Geophysical Research Letters, 43(8), 3897–3902. https://doi.org/10.1002/2016GL068851
Byrne, N. J., Shepherd, T. G., Woollings, T., & Plumb, R. A. (2017). Nonstationarity in Southern Hemisphere Climate Variability Associated with the Seasonal Breakdown of the Stratospheric Polar Vortex. Journal of Climate, 30(18), 7125–7139. https://doi.org/10.1175/JCLI-D-17-0097.1
Chen, G., Lu, J., & Frierson, D. M. W. (2008). Phase Speed Spectra and the Latitude of Surface Westerlies: Interannual Variability and Global Warming Trend. Journal of Climate, 21(22), 5942–5959. https://doi.org/10.1175/2008JCLI2306.1
Gerber, E. P., Voronin, S., & Polvani, L. M. (2008). Testing the Annular Mode Autocorrelation Time Scale in Simple Atmospheric General Circulation Models. Monthly Weather Review, 136(4), 1523–1536. https://doi.org/10.1175/2007MWR2211.1
Hassanzadeh, P., & Kuang, Z. (2019). Quantifying the Annular Mode Dynamics in an Idealized Atmosphere. Journal of the Atmospheric Sciences, 76(4), 1107–1124. https://doi.org/10.1175/jas-d-18-0268.1
Kidston, J., & Gerber, E. P. (2010). Intermodel variability of the poleward shift of the austral jet stream in the CMIP3 integrations linked to biases in 20th century climatology. Geophysical Research Letters, 37(9). https://doi.org/10.1029/2010GL042873
Lorenz, D. J. (2023). A Simple Mechanistic Model of Wave–Mean Flow Feedbacks, Poleward Jet Shifts, and the Annular Mode. Journal of the Atmospheric Sciences, 80(2), 549–568. https://doi.org/10.1175/JAS-D-22-0056.1
Lorenz, D. J., & Hartmann, D. L. (2001). Eddy–Zonal Flow Feedback in the Southern Hemisphere. Journal of the Atmospheric Sciences, 58(21), 3312–3327. https://doi.org/10.1175/1520-0469(2001)058<3312:EZFFIT>2.0.CO;2
Lubis, S. W., & Hassanzadeh, P. (2020). An Eddy–Zonal Flow Feedback Model for Propagating Annular Modes. Journal of the Atmospheric Sciences, 78(1), 249–267. https://doi.org/10.1175/JAS-D-20-0214.1
Lubis, S. W., & Hassanzadeh, P. (2023). The Intrinsic 150-Day Periodicity of the Southern Hemisphere Extratropical Large-Scale Atmospheric Circulation. AGU Advances, 4(3), e2022AV000833. https://doi.org/10.1029/2022AV000833
Ma, D., Hassanzadeh, P., & Kuang, Z. (2017). Quantifying the Eddy–Jet Feedback Strength of the Annular Mode in an Idealized GCM and Reanalysis Data. Journal of the Atmospheric Sciences, 74(2), 393–407. https://doi.org/10.1175/JAS-D-16-0157.1
Menzel, M. E., Waugh, D., & Grise, K. (2019). Disconnect Between Hadley Cell and Subtropical Jet Variability and Response to Increased CO2. Geophysical Research Letters, 46(12), 7045–7053. https://doi.org/10.1029/2019GL083345
Nie, Y., Zhang, Y., Chen, G., Yang, X.-Q., & Burrows, D. A. (2014). Quantifying barotropic and baroclinic eddy feedbacks in the persistence of the Southern Annular Mode. Geophysical Research Letters, 41(23), 8636–8644. https://doi.org/10.1002/2014GL062210
Robinson, W. A. (2000). A Baroclinic Mechanism for the Eddy Feedback on the Zonal Index. Journal of the Atmospheric Sciences, 57(3), 415–422. https://doi.org/10.1175/1520-0469(2000)057<0415:ABMFTE>2.0.CO;2
Saggioro, E., & Shepherd, T. G. (2019). Quantifying the Timescale and Strength of Southern Hemisphere Intraseasonal Stratosphere-troposphere Coupling. Geophysical Research Letters, 46(22), 13479–13487. https://doi.org/10.1029/2019GL084763
Sheshadri, A., & Plumb, R. A. (2017). Propagating Annular Modes: Empirical Orthogonal Functions, Principal Oscillation Patterns, and Time Scales. Journal of the Atmospheric Sciences, 74(5), 1345–1361. https://doi.org/10.1175/JAS-D-16-0291.1
Sheshadri, A., Plumb, R. A., Lindgren, E. A., & Domeisen, D. I. V. (2018). The Vertical Structure of Annular Modes. Journal of the Atmospheric Sciences, 75(10), 3507–3519. https://doi.org/10.1175/JAS-D-17-0399.1
Simpson, I. R., Hitchcock, P., Shepherd, T. G., & Scinocca, J. F. (2011). Stratospheric variability and tropospheric annular-mode timescales. Geophysical Research Letters, 38(20). https://doi.org/10.1029/2011GL049304
Simpson, I. R., & Polvani, L. M. (2016). Revisiting the relationship between jet position, forced response, and annular mode variability in the southern midlatitudes. Geophysical Research Letters, 43(6), 2896–2903. https://doi.org/10.1002/2016GL067989
Simpson, I. R., Hitchcock, P., Shepherd, T. G., & Scinocca, J. F. (2013a). Southern Annular Mode Dynamics in Observations and Models. Part I: The Influence of Climatological Zonal Wind Biases in a Comprehensive GCM. Journal of Climate, 26(11), 3953–3967. https://doi.org/10.1175/JCLI-D-12-00348.1
Simpson, I. R., Shepherd, T. G., Hitchcock, P., & Scinocca, J. F. (2013b). Southern Annular Mode Dynamics in Observations and Models. Part II: Eddy Feedbacks. Journal of Climate, 26(14), 5220–5241. https://doi.org/10.1175/JCLI-D-12-00495.1
Smith, S., Lu, J., & Staten, P. W. (2024). Diabatic Eddy Forcing Increases Persistence and Opposes Propagation of the Southern Annular Mode in MERRA-2. Journal of the Atmospheric Sciences, 81(4), 743–764. https://doi.org/10.1175/JAS-D-23-0019.1
Vallis, G. K. (2006). Atmospheric and Oceanic Fluid Dynamics (1st ed.). Cambridge University Press.
Vishny, D. N., Wall, C. J., & Lutsko, N. J. (2024). Impact of Atmospheric Cloud Radiative Effects on Annular Mode Persistence in Idealized Simulations. Geophysical Research Letters, 51(15), e2024GL109420. https://doi.org/10.1029/2024GL109420
Wall, C. J., Lutsko, N. J., & Vishny, D. N. (2022). Revisiting Cloud Radiative Heating and the Southern Annular Mode. Geophysical Research Letters, 49(19), e2022GL100463. https://doi.org/10.1029/2022GL100463
Xia, X., & Chang, E. K. M. (2014). Diabatic Damping of Zonal Index Variations. Journal of the Atmospheric Sciences, 71(8), 3090–3105. https://doi.org/10.1175/JAS-D-13-0292.1
York, D., Evensen, N. M., Martínez, M. L., & De Basabe Delgado, J. (2004). Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics, 72(3), 367–375. https://doi.org/10.1119/1.1632486
Zurita-Gotor, P. (2014). On the Sensitivity of Zonal-Index Persistence to Friction. Journal of the Atmospheric Sciences, 71(10), 3788–3800. https://doi.org/10.1175/JAS-D-14-0067.1
Zurita-Gotor, P., Blanco-Fuentes, J., & Gerber, E. P. (2014). The Impact of Baroclinic Eddy Feedback on the Persistence of Jet Variability in the Two-Layer Model. Journal of the Atmospheric Sciences, 71(1), 410–429. https://doi.org/10.1175/JAS-D-13-0102.1
Citation: https://doi.org/10.5194/egusphere-2025-666-RC2