On the foundation of the α-β-γ approach to carbon-climate feedbacks
Abstract. The α-β-γ approach used to quantify the size of the feedbacks between climate and carbon cycle consists of two elements: the α-β-γ formalism expressing the feedback strength by the sensitivities α, β, and γ, and an experimental practice to determine these sensitivities from Earth system model simulations using a transient scenario where CO2 is forced to rise far above its pre-industrial value. There are several reasons to be unsatisfied with this approach: the α, β, and γ sensitivities are introduced as linear expansion coefficients in the forcing and thus should be characteristics of the considered model as such, but they are known to be non-constant in time and to depend on the simulation scenario used to determine their values. Moreover, being linear, the whole approach should be valid only for sufficiently small forcing, so that the practice of calculating the sensitivities at the maximum forcing reached in the simulations is rather questionable. Finally, the definition of the sensitivities as linear expansion coefficients in the forcing turns out to be inconsistent with the practice of applying the formalism to transient simulations: we demonstrate that, because of the internal memory of the Earth system, by such a definition all sensitivities are mathematically zero and thus not well defined. But as we show here, the whole approach can be justified when introducing the α, β, and γ sensitivities from the outset not as differential, but as difference quotients. In this way a linearization is not needed and one obtains a fully non-linear description of the feedbacks. Moreover, the formalism can thereby be extended to also include the synergy between the feedbacks, so that it even becomes exact. Nevertheless, the scenario and time dependence remain, being a necessary consequence of the application of the formalism to transient simulations. In this respect the α-β-γ approach to climate-carbon feedbacks differs from the well-known description of atmospheric feedbacks: in the latter case not transient, but equilibrium states are employed to quantify the feedbacks, a practice consistent with a linearization in the forcing; accordingly, the obtained sensitivities, as well as the feedback strengths calculated from them, are proper characteristics of the system, independently of how the equilibrium was reached. This would also be the case for the calculation of climate-carbon feedbacks by the α-β-γ formalism if one used equilibrium instead of transient simulations to compute the sensitivities. In the light of these results we discuss in the outlook the pros and cons of various options for future research on the size of climate-carbon feedbacks, including also the application of the generalized α-β-γ framework to obtain insight into the memory structure of the climate-carbon system.
Status: open (until 25 Jan 2026)
- RC1: 'Comment on egusphere-2025-5133', Anonymous Referee #1, 06 Jan 2026
- RC2: 'Comment on egusphere-2025-5133', Chris Jones, 12 Jan 2026
Review of “On the foundation of the alpha-beta-gamma approach to carbon-climate feedbacks” by Reick & Mendonca
This manuscript delves into the applicability of the alpha-beta-gamma framework commonly used to quantify carbon cycle feedbacks in Earth System Models (ESMs). It provides a history of the approach, discusses its limitations, and challenges the notion that a linearised framework like this can be applied to transient simulations such as the 1% simulation.
The feedback framework was developed by Friedlingstein et al (2003) and has been in widespread use since then through various generations of C4MIP up to and including CMIP6 and will likely also be applied to CMIP7 simulations over the coming year or two. It was developed partly inspired by the classical feedback framework for measuring climate feedbacks, but is applied to a transient simulation rather than a steady-state one in which the forcings, feedbacks and response have come to a new equilibrium.
This new manuscript assesses the validity of this approach and the validity of referring to alpha-beta-gamma as a “linearised” feedback framework. It finds that, strictly speaking, alpha, beta, and gamma are not well-defined quantities if seen as linear expansion coefficients, but can still be useful if seen as difference quotients.
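For concreteness, the conventional C4MIP-style definitions I have in mind here are roughly the following (my own notation, not the manuscript's: ΔC_A is the change in atmospheric CO2, ΔT the warming, ΔC the change of a land or ocean carbon store, and the superscripts denote the fully coupled (COU) and biogeochemically coupled (BGC) runs):

```latex
\alpha \approx \frac{\Delta T^{\mathrm{COU}}}{\Delta C_A}, \qquad
\beta \approx \frac{\Delta C^{\mathrm{BGC}}}{\Delta C_A}, \qquad
\gamma \approx \frac{\Delta C^{\mathrm{COU}} - \Delta C^{\mathrm{BGC}}}{\Delta T^{\mathrm{COU}} - \Delta T^{\mathrm{BGC}}},
```

with all quantities taken at a chosen point of the transient run (typically the time of CO2 doubling or quadrupling) – i.e. read as difference quotients ΔX/ΔF rather than as derivatives dX/dF.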
Overall I find the manuscript well written, logically laid out, and for a paper with a high quantity of mathematical analysis, very well explained to the reader.
I have a few comments that I hope will make the manuscript and its recommendations clearer, but mainly recommend acceptance with minor revisions.
Chris Jones
My interpretation of your study is that the previous analyses listed here as “wrongly” describing this as a linear feedback approach are not necessarily wrong, but rather have somewhat lazily used the phrase “linear”. It is valuable to bring a level of robustness to the definition, but the use of the 1% simulations still appears valid. Your figure 2 panels c, d, e (alpha, beta and gamma) actually show a relatively straight line for each. For example, plotting delta-C against delta-T (“gamma”) we see a very close to straight-line relationship, and gamma is the gradient of this line. It is thus (for this scenario) approximately constant in time, and its definition as ΔC/ΔT is a good measure of the sensitivity of the carbon cycle to climate change. This straight line is likely what many studies refer to as “linear”, even though you show this is not the same as having a linear expansion of a feedback. In this sense, the current use of the 1% runs to measure a system sensitivity – at least up to 2xCO2 (it may break down at higher levels) – remains valid; the same applies to alpha and beta, which also show approximately straight-line responses. I think this is what you mean by saying that alpha-beta-gamma can still be valid as “difference quotients” (i.e. the gradient of these straight lines) rather than linear expansion coefficients.
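To make the “gradient of the straight line” reading explicit, here is a minimal sketch (my own variable names, assuming annual-mean numpy arrays from the COU and BGC 1% runs are available; this is not code from the manuscript):

```python
import numpy as np

def gamma_as_gradient(dC_cou, dC_bgc, dT_cou, dT_bgc):
    """Gamma read off as the slope of the close-to-straight line of carbon change
    against warming (GtC per K), using annual means from the COU and BGC runs."""
    dC = dC_cou - dC_bgc          # carbon change attributed to climate change
    dT = dT_cou - dT_bgc          # warming difference between the two runs
    slope, intercept = np.polyfit(dT, dC, 1)
    return slope

# The end-point difference quotient usually quoted in C4MIP tables is simply
# (dC_cou[-1] - dC_bgc[-1]) / (dT_cou[-1] - dT_bgc[-1]); if the line is close
# to straight, the two numbers nearly coincide.
```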
My main comment is that it would be useful to give more depth to the physical interpretation of the feedbacks and processes. We should remember the intended use of this analysis and therefore what we learn about the Earth System from these experiments. So overall, while I like the mathematical rigour and analysis you bring, I somewhat miss the improved utility of your suggestions. While we can of course change our analysis, what will be the benefit? We can (and likely will) continue to use alpha-beta-gamma in the 1% runs both to see how models have evolved since earlier generations and to explore policy-relevant metrics like TCRE. If we instead adopt your recommendations, what will we be able to do that is new or improved (other than claiming the moral high ground of being more mathematically correct!)?
With this in mind, it is not fully clear to me what you are saying we should (or shouldn’t) be doing next. The manuscript does not say that alpha-beta-gamma should not be applied to CMIP7 simulations, and it is very likely that it will be. So maybe the recommendation here is that we should be more careful in the presentation of this and not refer to it as a linearised feedback analysis, but rather as a measure of system sensitivity? It remains important to separate the sensitivity of the climate-carbon cycle system to CO2 and to climate, and we know that these metrics combine to inform us of TCRE (the Transient Climate Response to Emissions), which is a key determinant of the remaining carbon budget. So can you specify exactly how you recommend doing this?
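For reference, the way these sensitivities are usually combined into TCRE is the following (my shorthand of the standard relation from the C4MIP literature, not a result of this manuscript; β and γ stand for the land-plus-ocean sums, and all quantities are assumed to be in consistent units, e.g. α in K/GtC, β dimensionless, γ in GtC/K):

```latex
E \;=\; \Delta C_A \;+\; \underbrace{\beta\,\Delta C_A + \gamma\,\Delta T}_{\text{land + ocean uptake}},
\qquad \Delta T = \alpha\,\Delta C_A
\quad\Rightarrow\quad
\mathrm{TCRE} \;=\; \frac{\Delta T}{E} \;=\; \frac{\alpha}{1 + \beta + \alpha\gamma},
```

where E is the cumulative emission compatible with the prescribed CO2 rise ΔC_A.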
The study suggests other options:
- Firstly, that the analysis could be applied to a steady-state response. It’s not fully clear exactly what such an experiment would look like. For the climate system, a 2xCO2 or 4xCO2 run provides this framing – a top-of-atmosphere imbalance of energy is imposed and the system comes to a new steady state as the loss of energy increases to balance the imposed forcing. Is the recommendation that a step-CO2 change also provides an analysis of a new steady-state carbon cycle? The land and ocean sinks will eventually saturate, and we can define the response in the same way as for the thermal system? As with the climate system, we know this will take many centuries, so we may want an equivalent of a “Gregory plot” to help estimate an effective equilibrium response (see the sketch after this list)? It may be beyond the current study to fully define any new analysis, but maybe a few words on what you expect it to look like would be useful – now is a good time to test new experiments with some fast ESMs before a full C4MIP set of CMIP7 experiments is decided.
- Secondly, the study proposes a more generalised alpha-beta-gamma analysis based on Mendonca et al. This is well noted, and C4MIP will explore how to incorporate this into plans for CMIP7.
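Regarding the “Gregory plot” analogue mentioned in the first option above, here is a minimal sketch of the usual thermal version (regression of the top-of-atmosphere imbalance N against warming ΔT from an abrupt-CO2 run, extrapolated to N = 0); the function and variable names are mine, and a carbon-cycle analogue would still have to be worked out:

```python
import numpy as np

def gregory_estimate(dT, N):
    """Effective equilibrium warming from an abrupt-CO2 run (Gregory-plot style).

    dT : annual-mean warming relative to the control run (K)
    N  : annual-mean top-of-atmosphere net downward flux imbalance (W m-2)
    Returns (equilibrium warming estimate, feedback parameter lambda in W m-2 K-1).
    """
    slope, intercept = np.polyfit(dT, N, 1)   # fit N ~ intercept + slope * dT
    lam = -slope                              # climate feedback parameter
    dT_eq = -intercept / slope                # extrapolation to N = 0
    return dT_eq, lam
```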
Specific comments
- Line 27 – you say “particularly the ocean sink” is expected to weaken. I’m not sure why just the ocean? Results show land sinks also weaken. In fact under stabilisation or overshoot scenarios the ocean sink may persist longer than the land in many models.
- Line 155 – end of the background section – I like this overview. A nice summary. You could maybe add a little here that the use of the BGC run as the reference in Friedlingstein 2003 led to an over-focus of attention on the gamma term as _the_ carbon cycle feedback. The parallels with climate sensitivity led to this, but in that case the background stabilising feedback (the Planck feedback of a warming black body) was well known. One key outcome of Gregory et al 2009 was the emphasis on the beta term – for the carbon cycle the stabilising effect (the response of sinks to CO2) is actually both larger AND MORE UNCERTAIN than the climate response. Hence the reference to pre-industrial allows a more balanced view of both parts of the response.
- Section 4 (“Critique”) – you say that there is a strong assumption that global T is a measure of “climate changes” (including precipitation etc.). Yes, this is vital to understand. It is a (relatively) well-justified assumption because we know climate patterns scale well with global T (certainly in a 1% run). But it is actually a very poor assumption when we compare the COU and BGC runs, because the delta-T in the BGC run has very different characteristics from that in the radiatively forced run. Literature on this is lacking, but it is one reason why C4MIP does not attempt a more thorough removal of the small warming in the BGC runs: the associated gamma is very different and cannot be transferred between simulations.
- Figure 2c – can you clarify the units here? The x-axis is in “ppm”, but the inset, I think, uses units of GtC for C_A. Could these be made the same? Otherwise, by eye, the inset does not appear to agree with the main plot.
- Also on Figure 2 – given climate variability, I wonder if a 20-year mean could be applied (maybe in addition to the annual data)? That would then better reflect what we would actually measure when quoting alpha/beta/gamma.
- On the discussion of the synergy term, Sections 5/6, eqn (29) etc. As above, the maths here is robust, but we should also remember the science and physical understanding of what we are analysing and _why_. So in the case of the synergy term, yes, it could be measured as a separate term, or included in the BGC term. But as you say, by design in C4MIP we include it in the RAD term, by defining gamma from COU-BGC instead of from the RAD simulation. The reason for this is that there is a physical difference (not just a numerical artefact) in what these represent. The RAD run (climate change with no elevated CO2) measures how much carbon is lost due to warming. The COU-BGC pairing measures how much the enhanced uptake is hindered by warming. This is quite a different thing and is actually what we want to know – “to what extent will climate change affect the carbon sinks due to elevated CO2”. Hence our definition of gamma from COU-BGC is by design what we want to measure (the two definitions are spelled out just below).
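To spell out the two readings of gamma referred to here (my notation, with ΔC the change of a carbon store relative to the control run in the respective simulation):

```latex
\gamma_{\mathrm{RAD}} = \frac{\Delta C^{\mathrm{RAD}}}{\Delta T^{\mathrm{RAD}}},
\qquad
\gamma_{\mathrm{COU-BGC}} = \frac{\Delta C^{\mathrm{COU}} - \Delta C^{\mathrm{BGC}}}{\Delta T^{\mathrm{COU}} - \Delta T^{\mathrm{BGC}}},
\qquad
S = \Delta C^{\mathrm{COU}} - \Delta C^{\mathrm{BGC}} - \Delta C^{\mathrm{RAD}},
```

where S is the synergy: the COU-BGC definition absorbs S into the climate-carbon term, while the RAD definition leaves it out.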
Citation: https://doi.org/10.5194/egusphere-2025-5133-RC2
Viewed
Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 190 | 0 | 3 | 193 | 0 | 0 |
I read this paper with great interest and recommend it for publication with minor corrections. I also attach with this review my handwritten comments as a PDF file.
I have three primary comments. First, improve the manuscript for folks who are not well-versed in the concepts of mathematics. Second, include some additional maths and equations to help provide more context (I have made some suggestions in my handwritten comments). Third, and this is the most important, please clarify what linearity in this context actually means, and how it is or is not related to system memory/inertia. The following comments will become clearer when viewed together with my handwritten notes.
What is linearity – does it refer to the linear term in the Taylor expansion of the differential quotient (dX/dF) (which the authors show vanishes), or does it refer to equation (26)? The first half of the paper critiques the linearity assumption, only to show in the end that equation (26) works. So clearly, I have missed or misinterpreted something as a reader.
There’s also the question of memory versus linearity. I would like to suggest that the authors take this opportunity to make it clear that memory or inertia comes into the picture (through a convolution) as soon as F (time-invariant) becomes F(t) (i.e. time-variant), assuming the maths I worked out in my handwritten notes is correct.
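For concreteness, the relation I have in mind is of the standard linear-response form (my notation, with R the response/memory kernel; not copied from the manuscript):

```latex
\Delta X(t) \;=\; \int_0^t R(t-s)\,\frac{\mathrm{d}F}{\mathrm{d}s}\,\mathrm{d}s ,
\qquad\text{e.g. for } F(t)=at:\quad
\frac{\Delta X(t)}{F(t)} \;=\; \frac{1}{t}\int_0^t R(u)\,\mathrm{d}u ,
```

so that even for a strictly linear system the difference quotient depends on time and on the forcing history, not only on the instantaneous value of F.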
The authors make their case for the simple forcing function F(t) = at, but it seems that their arguments hold equally for F(t) = at^2 and for F(t) = F(0)·1.01^t − F(0) (i.e. the exponentially increasing CO2 in the 1pctCO2 run). I strongly suggest that the authors include these two cases to strengthen their argument.
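A minimal numerical sketch of what such an extension could look like (the exponential memory kernel, its 50-year timescale, the forcing amplitudes, and the 140-year horizon are arbitrary choices of mine, purely to illustrate how the difference quotient depends on the forcing scenario):

```python
import numpy as np

# Assumed memory kernel: exponential relaxation with a 50-year timescale (my choice).
tau = 50.0
t = np.arange(1, 141)  # years; roughly the length of a 1pctCO2 run

def response(F):
    """Delta X(t) of a linear system with memory: discrete convolution of dF/dt with the kernel."""
    dF = np.diff(F, prepend=0.0)
    R = np.exp(-np.arange(len(F)) / tau)
    return np.convolve(dF, R)[: len(F)]

forcings = {
    "F = a*t":             0.01 * t,
    "F = a*t^2":           1e-4 * t**2,
    "F = F0*(1.01^t - 1)":  280.0 * (1.01**t - 1.0),
}

for name, F in forcings.items():
    X = response(F)
    # Difference quotient Delta X / Delta F at the end of the run differs by scenario:
    print(f"{name:22s}  Delta X / Delta F = {X[-1] / F[-1]:.3f}")
```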
I was able to follow the abstract only after reading the entire manuscript. So perhaps try to make the abstract somewhat easier to follow for readers who are not well-versed in maths and systems language. For example, while I use difference and differential quotients in my day-to-day work, I wasn’t aware that this is what they are called. So perhaps use ΔX/ΔF and dX/dF explicitly in the abstract. Please also make these terms clear in the text, and how they are not the same.
Although this is already done in the concluding paragraphs of the last section, perhaps it could be made clearer by explicitly suggesting that 2xCO2 equilibrium runs can be used to quantify carbon/climate feedbacks similarly to the Hansen approach (assuming this is what the authors are essentially suggesting). The feedbacks would then not be affected by the long timescales of the carbon cycle processes in the land and the ocean. Correct?