This work is distributed under the Creative Commons Attribution 4.0 License.
Meteorological Drivers of the Low-Cloud Radiative Feedback Pattern Effect and its Uncertainty
Abstract. The radiative feedback pattern effect remains a large source of uncertainty for both projections of future trends and interpretations of past trends in global temperature. The pattern effect is defined as the difference in feedbacks between transient and long-term simulations, and past work shows that it is primarily attributed to changes in the marine low-cloud radiative feedback. Here we use low-cloud meteorological kernels to map out both the primary cloud controlling factors through which changing surface temperature patterns drive changes in low-cloud feedback and the sources of model spread. We find that the pattern effect is almost entirely driven by changes in EIS in the Southern Hemisphere, particularly in the Southeast Pacific and Southern Ocean. In both past and future simulations, inter-model spread is primarily caused by model differences in the sensitivity of low clouds to the environmental conditions rather than by differences in the simulated evolution of those conditions.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-3177', Anonymous Referee #1, 05 Aug 2025
The marine low-cloud pattern effect is an important and widely studied topic, but one that is far from being well understood. This manuscript analyzes this effect in detail in an ensemble of models using the cloud controlling factors (CCFs) method, and obtains results that are very clear, very interesting, and, in my opinion, represent a very significant advance. While the essential role of EIS has already been well established, the use of meteorological cloud radiative kernels, on the one hand, and the sensitivity of meteorological variables to average temperature, on the other, makes it possible to clearly separate what is related to the SST pattern from what is related to the response to this SST pattern. In my opinion, this is a very good manuscript that fully deserves to be published in ACP. I have only a few minor comments to make, which are presented below.
The difference between estimates using different data sets is mentioned in the manuscript without really being discussed. The manuscript highlights the importance of a good estimate of ∂R/∂EIS for the pattern effect. But if we compare Figure 1b with Figure A1, it seems to me that the estimate of ∂R/∂EIS differs significantly depending on the data set used. This point could be further emphasized, as could the importance of having a better observation-based estimate of this term, possibly with a discussion of the strengths and weaknesses of the different datasets and possible avenues for improvement.
pages 3-4, Eqs 1-3 and corresponding text: adding a subscript i to R_low would make the equations and text clearer.
l 62: dR/dTg => dR_low/dT_g ; \partial R / \partial CCF_i => \partial R_low,i / \partial CCF_i
l 69: d CCF/dT_g => d CCF_i/dT_g
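For readers less familiar with the method, the cloud-controlling-factor decomposition that these notation comments refer to, dR_low/dT_g = Σ_i (∂R_low/∂CCF_i)(dCCF_i/dT_g), can be sketched numerically. The factor names and all numbers below are hypothetical illustrations, not values from the manuscript:

```python
# Toy sketch of the CCF decomposition discussed above:
#   dR_low/dT_g = sum_i (dR_low/dCCF_i) * (dCCF_i/dT_g)
# All factor names and values are invented for illustration only.

# Kernel-derived sensitivities of low-cloud radiation to each
# controlling factor (W m^-2 per unit of CCF_i).
sensitivities = {"SST": 0.8, "EIS": -1.1, "RH700": -0.2}

# Response of each controlling factor to global-mean warming
# (units of CCF_i per K of T_g).
ccf_responses = {"SST": 1.0, "EIS": 0.3, "RH700": -0.1}

# Total low-cloud feedback: sum over controlling factors (W m^-2 K^-1).
feedback = sum(sensitivities[k] * ccf_responses[k] for k in sensitivities)
print(f"dR_low/dT_g = {feedback:.2f} W m^-2 K^-1")
```

The sketch makes the reviewer's subscript point concrete: each sensitivity is specific to one CCF_i, so writing ∂R_low/∂CCF_i (rather than ∂R/∂CCF) keeps the pairing of sensitivity and factor unambiguous.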
Citation: https://doi.org/10.5194/egusphere-2025-3177-RC1
RC2: 'Comment on egusphere-2025-3177', Anonymous Referee #2, 11 Aug 2025
Tam et al. analyse a set of 16 CMIP models to study differences in the marine low-cloud radiative feedback as well as the pattern effect contribution. To this end they analyse three different experiments: AMIP, historical and 4xCO2. From their analysis they find the pattern effect to be mostly driven by Southern Hemispheric EIS changes, and changes between the models to be due to differences in cloud sensitivity rather than differences in cloud controlling factor changes.
I think this is a very well done analysis tackling an important issue. However, at times I found some arguments and conclusions hard to follow. Extending some of the discussion of the results would help explain the authors' reasoning. Also, a deeper discussion of previous results seems necessary to place the current results into context. While these points are important to address, I have no fundamental objections to the analysis and can therefore recommend publication after minor revisions.
Introduction
Could do with a gentler start (see my comments below under “Pattern effect”). Also, the jump to the last paragraph is very abrupt. You didn't really map out why what you are trying to do is important and novel; readers who are less familiar with the field will appreciate this. I think the introduction would also benefit from expanding the discussion of what exactly this manuscript sets out to do. This is currently covered in only one sentence in the last paragraph, but seems to be covered in more detail in L. 114-117.
Pattern effect
I think the manuscript would benefit from a deeper introduction to what the pattern effect is (possibly in or before the current first paragraph), to help readers who are slightly less familiar with the topic. This extends to the discussion about quantifying the pattern effect. You mention in the abstract and in the conclusions (L. 252) that “The pattern effect is defined as the difference in feedbacks between transient and long-term simulations.” However, nowhere in the introduction is this definition mentioned or discussed. A more in-depth discussion of this definition and an explanation of its use should be given in the introduction or method section.
Building on the previous paragraph: in 3.1.1/3.1.2 you argue that the change from AMIP -> historical -> 4xCO2_fast -> 4xCO2, in the model mean as well as in individual models, is a measure of the pattern effect. I think this needs more discussion/justification, as it is at the core of your analysis of the pattern effect. I agree with your analysis: given that the sensitivities used for the calculations are the same throughout the experiments, the changes have to come from changes in the CCF patterns. However, this is quite a leap for readers to make, especially for those less familiar with the field. I think your reasoning should be discussed explicitly, i.e. why the differences between the experiments are an indication of the pattern effect, perhaps even referring to Eq. 1 for a quantitative explanation.
L. 147 In line with all the above comments, this sentence is very vague.
Previous literature
The authors should incorporate more discussion of previous literature. For example, one of the main points, that the inter-model spread in feedback is mostly caused by differences in cloud sensitivity rather than differences in CCF changes, is a somewhat known result (e.g. Klein et al. 2017). The same is true to an extent for the EIS changes. The authors' analysis is very thorough and has (to my knowledge) not been done in this way before, but there are several related studies, and it should be clarified which findings agree (or disagree) with previous results and which findings are novel.
As a side note, given that the authors have done all this work, an extension of the analysis to provide insight into what drives the differences across experiments in the models seems an interesting, novel and important question. I think this would be a not too time-consuming additional analysis, but I leave it up to the authors whether they want to pursue this idea.
Minor
L. 12-14: Less negative over time under which conditions? 4xCO2, historical, 1% CO2 increase: all of those?
L. 21: This needs a source in my opinion; also, as far as I am aware, not all models show this behaviour.
L.28: Again a source seems needed here
L. 67 In my mind, sensitivities are partial derivatives, since it tells me how x changes with y, all else being equal. Personally I wouldn’t use the word here. However, I don’t think there is a fixed definition, so this is just a suggestion.
L. 77 Are all these studies on these six specifically?
L. 81 While I think I know what the authors mean by “local large scale environment”, I find this expression a bit self-contradictory
L. 111 I wouldn’t call epsilon an error term. It doesn’t come from measurement errors or similar things, but from inherent covariances in the system, as the authors point out themselves.
L. 114-117 This information seems to belong (at least in parts) into the introduction rather than the method section
L. 117 Why is there a citation here? Aren’t those the goals you are stating?
L. 168 I assume ‘pattern’ refers here to the relative importance of the terms of Fig. 1b compared to the relative importance in the subfigures of Fig. A1? In that case I would use a different word, as it might be misleading.
L. 173 as in L.168
L. 179 or a covariance term (epsilon). Was the strength of this term checked in this analysis, similar to what was done for the pattern (Fig. 4)?
L. 178-184 I would have expected you to do the decomposition according to Eq. 3, but you replaced the term using model-average sensitivity with observed sensitivity in Fig. 1b. I understand that you probably wanted to “recycle” Fig. 1b and not create an almost identical figure again, but I still found it initially confusing. Also, it makes 1b and 1c not comparable apples to apples, since the spread will be larger/smaller if the sensitivities are larger/smaller. I fully acknowledge that the result seems pretty robust in terms of where the spread is coming from, but it still initially confused me.
L. 191 Following up from the above comment (on L. 178-184), I see you address this here by pointing to A1d (btw, the d is missing). Why not cite this in the paragraph above and adjust the discussion accordingly to avoid confusion?
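The comparability point the reviewer raises can be illustrated with a toy version of the spread attribution: holding either the sensitivity or the CCF change fixed at a common value isolates one source of inter-model spread, and the size of that spread depends on which common sensitivity is chosen. All model names and numbers below are invented for illustration:

```python
# Toy spread attribution: each model's EIS term is sensitivity * dEIS.
# Fixing one factor at the multi-model mean isolates the spread due to
# the other factor. All values are hypothetical.
models = {
    "A": {"sens": -1.3, "dEIS": 0.25},
    "B": {"sens": -0.9, "dEIS": 0.35},
    "C": {"sens": -1.1, "dEIS": 0.30},
}

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    return max(xs) - min(xs)

sens_mean = mean([m["sens"] for m in models.values()])
deis_mean = mean([m["dEIS"] for m in models.values()])

full = [m["sens"] * m["dEIS"] for m in models.values()]       # full term
vary_sens = [m["sens"] * deis_mean for m in models.values()]  # spread from sensitivity
vary_ccf = [sens_mean * m["dEIS"] for m in models.values()]   # spread from CCF change

print(spread(full), spread(vary_sens), spread(vary_ccf))
```

Swapping the common sensitivity (e.g. model mean vs. an observed estimate) rescales the `vary_ccf` spread by the same factor, which is why the reviewer notes that the panels are not directly comparable apples to apples.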
L. 221 I would put ‘in historical and 4xCO2’ instead of ‘across experiments’. I stumbled over this part because I stopped reading and looked at the maps and found there to be some significant difference between AMIP and the other experiments (Fig. 3a compared to 3b and 3c). That threw me off until I continued reading and saw your caveat in the next sentence.
L. 236-238 I don't follow how you arrive at this conclusion from the results presented before. Against what is the historical estimate biased high? Against the AMIP? And why is it biased high? Also, why is the pattern effect biased low? I think this part needs significant expansion to explain your reasoning.
L. 239 Maybe add a transition sentence here to help the reader understand what you now intend to have a look at.
Technical
L. 51 refer to
L. 62 R-> R_low (twice)
L. 63 Being a bit picky, but i is the index; the individual CCF would be CCF_i
L. 69 CCF_i
L. 89 calculating -> calculated
L. 120 would cut sub-component
L. 207 to -> do
Figures
Fig. 1: exp -> Exp.
Fig. 1: Maybe put a dot between the two terms in the titles and remove the parentheses. Also, you could consider shortening model -> m and CERES -> C to make things more readable.
Fig. 1: Personally, I would remove the bold font in the caption.
Fig.2/3: The maps are relatively small and the label for the zonal mean plots almost not readable in the printed version. For the digital version this is of course not a problem. The authors could consider rearranging the plots to make them bigger.
Fig. 4: I would suggest calling the last column epsilon, to stay consistent with Eq. 3.
Fig. 4: Again, personally I would remove the bold font in the caption.
Citation: https://doi.org/10.5194/egusphere-2025-3177-RC2
RC3: 'Comment on egusphere-2025-3177', Anonymous Referee #3, 22 Aug 2025
Review of "Meteorological Drivers of the Low-Cloud Radiative Feedback Pattern Effect and its Uncertainty" by Rachel Yuen Sum Tam and co-authors, for consideration in Atmospheric Chemistry and Physics.
In this study the authors examine the pattern effect of low-level marine clouds using a cloud controlling factor decomposition. The AMIP, historical and 4xCO2 experiments are used to estimate how the feedback depends on the time since the applied forcing. While this evolution is fairly consistent and independent of radiative kernel or model, it is found that there is substantial inter-model spread in how models simulate the response of clouds to atmospheric stability.
Overall, I would say the results are unsurprising, but I also haven't seen this done before, so the study makes a valuable addition to the literature. My main concern with the study is that the sea surface temperatures prescribed in the atmosphere-only AMIP experiment are taken to be the true forced pattern, when in fact studies have shown that other datasets more closely resemble the magnitude of the pattern effect simulated in coupled historical runs, and that observations are broadly within the range of model internal variability. Hence, several of the remarks and conclusions regarding possible model biases must be adjusted or removed. When reading I was also mildly concerned by the somewhat limited/narrow selection of cited literature, so I have made some effort to include reading suggestions in the detailed comments below.
Overall, I would say my main concern is one of major weight, but is easily addressed provided the authors agree to do so. Other than this, I think the study is important and should be published.
---
Abstract+Conclusions: the authors describe clearly what was done, but I somewhat miss what was learned. For me, a striking result is that the multi-model mean kernel yielded feedback changes between simulations that were a bit smaller than, but not very different from, those of the observational kernels. I think this is important to convey. One might even try to quantify this.
The introduction is very short. I am particularly missing a paragraph on the mechanisms underlying the patterns, and some discussion of forced vs. internal-variability SST patterns. This is important for the study, since the AMIP runs are later assumed to represent the true forced fast response, but some studies indicate that the AMIP SSTs are an outlier. Furthermore, the East Pacific has caught up a lot since 2014, when the AMIP experiment ends, suggesting it was partly internal variability, perhaps partially associated with the hiatus period. Some papers to include could be Clement et al. (1996), Liu (1998), Pierrehumbert (2000), England et al. (2014), Kosaka and Xie (2016), Seager et al. (2019), Watanabe et al. (2020), Heede et al. (2020), Hayashi et al. (2020), Heede and Fedorov (2021), but there are probably several others that I have missed.
13, Here you might also add Rugenstein et al. (2016, 2019, 2020), Li et al. (2012), Knutti and Rugenstein (2015).
17, Some more papers of relevance here are Olonscheck and Rugenstein (2024) and Modak and Mauritsen (2023).
25, I suggest looking at Ceppi and Gregory (2017, 2019) and Hedemann et al. (2022).
44-46, Worth mentioning that Olonscheck et al. (2020) found the observed SST patterns to be broadly consistent with model internal variability. Also, observational uncertainty exists in the underlying reconstructed past SSTs, and the dataset used in the standard AMIP experiment is an extreme outlier (Modak and Mauritsen 2023). Not necessarily wrong, but also not necessarily right.
55, Nevertheless, they are no longer the main source of community assessed uncertainty (Sherwood et al. 2020, Forster et al. 2021).
83, I would say hours to days.
131, Unclear statement, try to rewrite or consider deleting.
141, Note that the pattern effect estimated from AMIP runs is an outlier, and the average SST reconstruction yields a pattern effect centered on that simulated by historical (Modak and Mauritsen 2023).
161-163, Good question whether it is a bias in models, or in observations, or simply an expression of natural variability, see several references mentioned earlier.
Section 3.1.4 One has to do quite a bit of thinking to read out the results presented in this section from Fig. 1 and A1. Would it not be feasible to condense this into an estimate of how much spread comes from the kernel and the CCFs in each case?
209, Is there any reason this is likely? It is "possible", yes, but likely is usually meant to indicate more than 66 percent probability.
Figure 2, 3 and 4, These maps are too small and distorted, squeezed horizontally.
233-234, I am unsure if they should replicate them if it is due to errors in AMIP and/or internal variability.
236-238, I do not think this can be concluded based on the presented evidence.
270-271, Same issue, this cannot be concluded.
Citation: https://doi.org/10.5194/egusphere-2025-3177-RC3
Viewed

| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 437 | 48 | 15 | 500 | 13 | 10 |