Opinion: Why all emergent constraints are wrong but some are useful – a machine learning perspective
Abstract. Global climate change projections are subject to substantial modelling uncertainties. A variety of emergent constraints, as well as several other statistical model evaluation approaches, have been suggested to address these uncertainties. However, they remain heavily debated in the climate science community. Still, the central idea to relate future model projections to already observable quantities has no real substitute. Here we highlight the validation perspective of predictive skill in the machine learning community as a promising alternative viewpoint. Building on this perspective, we review machine learning ideas for new types of controlling factor analyses (CFA). The principal idea behind these CFA is to use machine learning to find climate-invariant relationships in historical data, which also hold approximately under strong climate change scenarios. On the basis of existing data archives, these climate-invariant relationships can be validated in perfect-climate-model frameworks. From a machine learning perspective, we argue that such approaches are promising for three reasons: (a) they can be objectively validated both for past data and future data, (b) they provide more direct, by design physically plausible, links between historical observations and potential future climates, and (c) they can take higher-dimensional relationships into account that better characterize the still complex nature of large-scale emerging relationships. We demonstrate these advantages for two recently published CFA examples in the form of constraints on climate feedback mechanisms (clouds, stratospheric water vapour), and discuss further challenges and opportunities using the example of a climate forcing (aerosol-cloud interactions).
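As a reading aid for the perfect-climate-model validation idea sketched in the abstract, here is a minimal Python sketch: for each (synthetic stand-in) climate model, a statistical function is fitted on internal variability only and then asked to predict that same model's forced response. All array names, the synthetic data, and the choice of ridge regression are illustrative assumptions, not the papers' actual implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_models, n_months, n_factors = 10, 480, 5

errors = []
for m in range(n_models):
    # Controlling factors X and target y from one model's internal variability
    # (stand-ins for, e.g., detrended monthly anomalies of a control run).
    X_internal = rng.standard_normal((n_months, n_factors))
    true_coef = rng.standard_normal(n_factors)
    y_internal = X_internal @ true_coef + 0.3 * rng.standard_normal(n_months)

    # Step 1: learn the controlling-factor relationship on "historical" data.
    f = Ridge(alpha=1.0).fit(X_internal, y_internal)

    # Step 2, the perfect-model test: feed f with the same model's forced
    # changes in the factors and compare with its known forced response.
    dX_forced = rng.standard_normal(n_factors)   # e.g. 4xCO2 factor changes
    dy_true = dX_forced @ true_coef              # the model's "true" response
    dy_pred = f.predict(dX_forced[None, :])[0]
    errors.append(abs(dy_pred - dy_true))

print(f"mean perfect-model prediction error: {np.mean(errors):.3f}")
```

If the learned relationship is climate-invariant in this sense for every model in the archive, that lends credibility to applying the observationally fitted counterpart to the real world.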
Status: closed
RC1: 'Comment on egusphere-2024-1636', Anonymous Referee #1, 11 Jul 2024
General comments:
This Opinion paper reviews the challenges and limitations of constraining future climate response using emergent constraints and discusses an alternative approach, which combines climate-invariant controlling factor analyses (CFA) and machine learning. The authors demonstrate the advantages of CFA, along with the remaining challenges and potential applications to model tuning. Overall, the paper is well-structured, and I have no major concerns with it. The following comments are meant to improve the clarity of the article.
Specific comments:
- Emergent constraint is a fundamental concept for this topic, and I believe a clearer definition is needed in the Introduction section before introducing the associated limitations. The authors have provided more details when discussing the difference between CFA and emergent constraints, but I recommend adding one to two sentences in section 1.2.2.
- Figure 1: “In (b), internal variability uncertainty for individual ensemble members …” Since only one ensemble member for each model is shown, the figure technically didn’t provide any information regarding internal variability uncertainty. Alternatively, the authors may consider adding an inset figure in Fig 1b to show the internal variability for one model, or at least remove the phrase “for individual ensemble members” in the caption. In addition, I suggest adding a dashed line at year 2050 to highlight the difficulty of distinguishing projected warming by that year.
- Figure 2: The final observational constraint (delta_y_constrained combined with prediction error) is shown as light blue line in the bottom right figure, but it is different from “delta_y_constrained” (light blue color) in the equation. Please consider revising the figure to make them consistent. For instance, the black dashed distribution could be changed to a light blue dashed line, and the light blue distribution could become black.
- Figure 2: What is the temporal resolution of the observations in the top right figure? The text mentioned they are monthly-mean data but it doesn’t seem correct. Please clarify this.
Technical corrections:
- Figure 2 caption: “the violet lines the predictions of the functions are fed with the model-consistent changes in the controlling factors.” It seems like a verb is missing in the sentence. Same for Figure 4’s caption: “the solid red lines the linear regressions”.
- L256: The uncertainty arises not just from changes in cloud cover but from changes in cloud properties, including cloud height, cloud optical depth, etc. Please consider rephrasing it.
- L280: duplicated “be”
- L331: add a comma (,) after “…to be non-linear (Carslaw et al., 2013a)”
- L337: duplicated “either”
Citation: https://doi.org/10.5194/egusphere-2024-1636-RC1
- AC1: 'Reply on RC1', Peer Nowack, 17 Nov 2024
RC2: 'Comment on egusphere-2024-1636', Anonymous Referee #2, 30 Oct 2024
The 'emergent constraints' approach uses a model's ability to simulate an 'observable' measure of the climate, e.g. the variability of some quantity on relatively short time scales, as an indicator of its ability to predict an 'unobservable' measure, usually some measure of long-term climate change due to changes in GHGs or similar. The approach leads to an estimate of the likely value of, and likely uncertainty in, the 'unobservable' measure.
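To fix ideas for what follows, a minimal sketch of this recipe with synthetic numbers in place of a real model ensemble; the variable names, the synthetic data, and the simple Gaussian error propagation are illustrative assumptions, not any particular published constraint.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic ensemble: each model provides an observable measure x (e.g. a
# short-timescale variability metric) and an unobservable measure y (e.g. a
# long-term climate change response).
n_models = 30
x_models = rng.normal(1.0, 0.3, n_models)
y_models = 2.0 * x_models + rng.normal(0.0, 0.2, n_models)

# Across-model regression between the observable and unobservable measures.
fit = stats.linregress(x_models, y_models)

# Combine with an observational estimate of x (value and uncertainty); the
# residual spread around the regression adds prediction uncertainty.
x_obs, x_obs_sigma = 1.1, 0.05
residual_sigma = np.std(y_models - (fit.intercept + fit.slope * x_models))
y_constrained = fit.intercept + fit.slope * x_obs
y_sigma = np.hypot(fit.slope * x_obs_sigma, residual_sigma)
print(f"constrained y: {y_constrained:.2f} +/- {y_sigma:.2f}")
```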
This article discusses the potential shortcomings of the emergent constraints (hereafter EC) approach and how an alternative approach based on machine learning, specifically using 'controlling factor analysis' (CFA), can be used to provide a more reliable estimate of an 'unobservable' measure on the basis of model simulation of observable measures.
Given that this is an opinion article, the authors have some freedom in choosing what they include and what they say. I found the article interesting and thought-provoking, so in that sense the article meets the requirements. However, I think the article could be improved in several ways, and I will now present several comments that might encourage the authors to make some changes.
The article starts off with a review of climate model uncertainty and the methods that might be used to reduce that uncertainty. It then focuses on emergent constraints and the limitations on those. Much of this has been well described in the Williamson et al review which the authors cite. The challenge for the authors is to convey the important points to the reader in a compact form. Sometimes I found myself puzzling over the wording chosen by the authors -- what did they really mean to say? I have made various detailed comments on that below.
The second section of the article sets out the CFA approach and illustrates it using material from two recent papers of the first author. So this is more reviewing the author's recent work than 'opinion', but it was interesting to learn more about that work. There was an initial statement about the advantages of the CFA approach -- it wasn't obvious to me that characteristics of CFA as described here were absolutely distinct from the characteristics of EC.
For example, it is claimed that CFA is based on known physical relationships between predictand and predictors, whereas (perhaps) by implication EC is not. I could take the case of the Nowack et al (2023) water vapour work to examine this claim. There is indeed a known relation between temperature variations and stratospheric water vapour variations on relatively short time scales -- say decadal or less. What is not known is whether the same relation holds between variations on, say, centennial time scales, because of the possible role of changes in background aerosol, changes in the nature and frequency of convection, etc. The approach taken in the Nowack et al (2023) work was to demonstrate that, in models, the relation between temperature and water vapour inferred on 'observable' time scales also applied under imposed climate change (e.g. 4xCO2) -- this supported the hypothesis that a relation found on observable time scales also applied on longer timescales, or equivalently for larger perturbations of the system -- and this underpinned the approach of using model reproduction of the relation between temperature and water vapour on observable time scales as a basis for assessing a model's capability to reproduce the corresponding relation for a climate-change perturbation. So there seems to be the same leap of faith required in this approach as in the emergent constraint approach -- that model simulation of an observable phenomenon, as compared to observations, can be used as a calibrator of model simulation of a climate change response. (To be sure, it is fair to say that there is a known physical relationship between temperature and stratospheric water vapour on decadal timescales and below, but I wouldn't say that there is a corresponding physical relationship on longer timescales, because other physical ingredients/mechanisms may be important.)
The distinctive ingredient of the CFA approach is, of course, that it does not require (or allow) the observable and the climate-change indicator to be selected separately, albeit supported by some scientific argument. But in other respects the CFA approach and the emergent constraints approach have significant common ground.
The section on 'Challenges' is a bit superficial -- a few lines on each of two topics. The section on 'Opportunities' is, I guess, justified on the basis of 'other ways to use ML to exploit observational constraints in assessing climate predictions' -- but there is little hint of this in the abstract, and the content seems unfocused. I found it difficult to get much out of the first section; there simply were not enough details given. The second section seemed to be purely speculative, and the third section seemed to have a very tenuous link to everything else in the article. To be honest, Sections 3 and 4 have the flavour of 'here are a few things I have just thought of'.
The final section returns to CFA specifically and suggests, for example, that CFA might help validate emergent constraints. I think that it is worth thinking about terminology here -- and this article would be particularly useful if it encouraged well-organised terminology. CFA validating emergent constraints somehow seems at odds with the original idea that CFA is a substitute for emergent constraints. Do you recommend extending the use of 'emergent constraints' to mean the general approach of using model simulations of observed measures to validate model predictions of unobservable measures? Or would it be better to keep 'emergent constraints' for the particular class of approaches that have been used over the last decade or so, where a relation across a set of models between an observable measure and an unobservable measure is used in conjunction with observations to validate or reject the prediction of the unobservable measure?
Some detailed comments follow.
DETAILED COMMENTS
Title: -- I'm not sure that the title makes sense. Some emergent constraints could surely be right (i.e. not wrong) -- though many might be wrong. Are you trying to say that the methodology behind emergent constraints is fundamentally unsound -- even if some happen to be right/useful by accident? (One problem being that we don't have any grounds for identifying the latter.)
L4: 'Here we highlight the validation perspective of predictive skill in the machine learning community as a promising alternative viewpoint.' -- not entirely sure what this sentence means. Are you simply referring to the systematic process of training, then validation, as applied in ML?
L7: 'to find climate-invariant relationships in historical data, which also hold approximately under climate change scenarios' -- but isn't the 'climate invariance' tested by examining whether the relationship deduced from the present (model) climate also holds for future change? So I don't understand the 'also'.
L10: 'the still complex nature of large-scale emerging relationships' -- why 'still'? And why 'emerging' -- not 'emergent', which you have used previously. Is some difference being implied?
L14: 'climate forcing (aerosol-cloud interactions)' -- are aerosol-cloud interactions a climate forcing? I thought that they were more part of the response to a climate forcing.
L20: 'accelerating ... climate change projections' -- I don't understand what you mean by accelerating a projection.
L22: 'supposedly' sounds as though you don't believe that 'new influences of climate change' are actually negligible. But surely it is quite reasonable to assume that over the duration of a weather forecast -- let's say up to seasonal -- the effect of systematic climate change, e.g. the effect of increasing greenhouse gases, is negligible.
L24: 'changing boundary conditions' -- do you mean changing greenhouse gases etc., i.e. what you have previously referred to as 'forcing'? If so, then why not use that term? If not, then what do you mean?
L31: 'as an alternative approach' -- to what exactly?
L62: 'internal variability uncertainty, in turn, is usually reduced by averaging responses across multiple ensemble members' -- in one sense this doesn't reduce the uncertainty -- the uncertainty (e.g. in surface temperatures over some region 20-30 years into the future -- which is, after all, the example you are showing) is irreducible -- see papers by Deser and others.
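A toy numerical illustration of the referee's distinction, with purely synthetic numbers: ensemble averaging shrinks the uncertainty of the estimated forced (ensemble-mean) response like 1/sqrt(N), but leaves the spread of any single realisation -- the one future that will actually occur -- unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
forced_response, internal_sigma, n_members = 1.5, 0.5, 40

# 1000 hypothetical ensembles of n_members realisations each.
members = forced_response + internal_sigma * rng.standard_normal((1000, n_members))

print("spread of single realisations:", members[:, 0].std())        # ~ 0.5
print("spread of ensemble means:    ", members.mean(axis=1).std())  # ~ 0.5/sqrt(40)
```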
L69: 'simpler' > 'simple'? (If 'simpler', then simpler than what?)
L88-90: 'Instead a model that performs worse on certain past performance measures might actually be more informative about the true response' -- this might be true -- i.e. there might be such a model -- or it might not -- there might not be such a model. The sentence seems unrelated to the preceding and following sentences, which are both about whether past behaviour -- i.e. behaviour under the present model climate -- is an indicator of future response; there is nothing in either sentence about whether the future response is 'true'.
L92: 'and could even be targeted' -- I don't understand what you are saying here.
L117: 'A central aspect of emergent constraint definitions' -- would make more sense to me as 'A central hypothesis of the emergent constraint approach'
L121: 'CMIPs' -- best to define the 'CMIP' abbreviation.
L136: 'by design indirect' -- makes it sound as though EC was specifically designed to be indirect, isn't it more that it is inevitably indirect -- there is no systematic a priori method of determining what observable will give a useful relation to the climate response measure of interest.
L140: 'one can attempt to manipulate (the observed) to better match the observational record' -- not sure what you mean by 'manipulate' or 'manipulate (the observed' -- do you mean to modify the model so that its simulation of an observable measure improves relative to observations?
L141: 'away' > 'way'
L154: 'correlations of this kind will always be present in big data climate archives' -- when you write 'big data', do you simply mean large datasets -- e.g. resulting from long simulations or large ensembles? Isn't the point more about the complexity of the system being modelled -- the complexity implies that there are very many possible (perhaps seemingly scientifically sensible) metrics to choose from, and this means that, with finite datasets, some high correlations may arise by chance?
L162-165: To me the logic would be clearer if the two sentences were swapped (and slightly amended) -- i.e. you are claiming it as a fact, based on work with CMIP archives, that some emergent constraints that seemed robust in one CMIP exercise were not robust when applied in a later exercise. That indicates that the relations apparently found were statistically overconfident or coincidental.
L167: Repeat the definition of CFA here, as it is a key concept.
Equation (1): When I first looked at this formula and the accompanying text, I interpreted it as meaning that theta represented parameters used to define f, i.e. a parametric representation where the training is used to deduce the optimal choice of theta. But then, reading 'measure the importance of the controlling factor relationships found' and seeing (4), I now understand the situation as being that theta represents different measures of the dependence of f on its argument X -- i.e. once f is known (or chosen), then the theta are known. If that applies, then (1) seems an odd way to express it.
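For concreteness, the two readings distinguished here could be written as follows; this is a sketch in assumed notation, since the preprint's exact symbols are not reproduced in this discussion.

```latex
% Reading (i): \theta parametrises f and is chosen by training
\hat{\theta} = \operatorname*{arg\,min}_{\theta} \; \sum_{i} \bigl( y_i - f(X_i; \theta) \bigr)^2
% Reading (ii): \theta collects importance measures derived from an already
% fitted f, e.g. its sensitivities to the individual controlling factors
\theta_j = \frac{\partial f}{\partial X_j}
```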
L194: 'Expert knowledge is pivotal ...' -- I imagine that proponents of EC would say the same thing and see that as a common feature of both CFA and EC.
L210: 'CFA instead learns from internal variability and uses these relationships in a climate-invariant context to also constrain the future response, without the latter being involved in the fitting process' -- all true, but highlights some of the potential delicacy of the CFA approach. First what is generated by the learning may not have the required climate-invariant property -- i.e. for each model it fails to predict the climate-change or equivalent response -- perhaps for the reasons articulated in Section 3, or perhaps because the physics of short-term variability is simply different from the physics of longer term variation and the latter is not revealed by the training data on short-term variability. Second, whilst for models, the climate-invariant property may be verified, the applicability to the real climate may be limited because in the real climate longer term variation is determined by processes that are poorly represented in current models (but do not play a strong role in determining the nature of short-term variability). I see these as additional challenges alongside those mentioned in Section 3.
L212: 'sample size ... is no longer listed by the number of models in the ensemble' -- this is justified, I suppose, by the fact that both spatial and temporal variation are being considered -- but this would be useful only if that spatial and temporal variation was helpful in characterising processes relevant to climate change responses or equivalent -- and that might not be the case.
L223-229: You introduce the important issue of the challenge of extrapolation but then don't really deal with it. The sentence about Beucler et al is simply mysterious. Then at the end of the paragraph you mention normalisation by global mean surface temperature -- but this surely isn't relevant to the extrapolation difficulty.
L231: Why 'climate-invariant' not 'model-invariant'? I guess you mean climate-invariant within a given model -- i.e. you find the same relation holds for different climates within the given model.
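To make the normalisation being questioned here concrete, a minimal sketch; the function name, the rescaling by global-mean surface temperature anomaly, and all data are illustrative assumptions, not the preprint's implementation.

```python
import numpy as np

def per_degree_of_warming(X, gmst_anomaly, eps=1e-6):
    """Express controlling-factor anomalies per degree of global-mean warming,
    so that a mapping fitted on present-day variability is evaluated closer to
    its training range under a strongly warmed climate."""
    return X / (np.abs(gmst_anomaly)[:, None] + eps)

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 4))    # monthly controlling-factor anomalies
gmst = np.linspace(0.1, 1.2, 120)    # global-mean surface temperature anomaly (K)
X_invariant = per_degree_of_warming(X, gmst)
```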
L240: I'm confused by the introduction of m. (3) has an m on the right-hand side but not on the left-hand side. So you are saying that you find several different ways of estimating the same y in terms of the same X. I don't understand how to relate that to the 'number of different observational functions' in the final sentence of the paragraph.
Figure 3: Some parts of this Figure are taken from Ceppi and Nowack (2021) paper and are described in more detail there. I couldn't find the 'predictions on internal variability' and 'predictions under 4xCO2 forcing' graphs -- and, even if these graphs were generated by real simulation, there is scope for misinterpreting them. We are not told anything about time scales, but I suppose that the graphs extend over a few years. The lower graph shows, I guess, that within a single model, the inferred f is skilful in predicting short-term variability. The relation of this to the plot below it is weak -- the plot below is about how in longer time averages the prediction from the inferred f matches the model prediction over a largish set of models. The arrows in the Figure indicate a sort of logical flow but the logic is actually not very clear.
L314: Fueglistaler et al (2009) is a very good review of the TTL, but since you are here referring specifically to the link between temperatures and stratospheric water vapour, there are more directly relevant papers that could be cited -- e.g. Fueglistaler et al. (2005): Stratospheric water vapor predicted from the Lagrangian temperature history of air entering the stratosphere in the tropics, J. Geophys. Res., 110, D10S16, doi:10.1029/2004JD005516, which is more focussed on water vapour.
Figure 5: This was intended -- I assume -- to illustrate challenges for the CFA approach, the EC approach and for other approaches. So it is unfortunate that the caption refers only to 'a typical emergent constraint'.
L347: 'however' makes sense in continuing from the last sentence of Section 3 to the first sentence of Section 4 but this seems odd -- almost as if at one point there was no section break and then one was added.
L418-419: This seems to be a bit of an ML in-joke -- not sure that many readers will get it.
Citation: https://doi.org/10.5194/egusphere-2024-1636-RC2
- AC2: 'Reply on RC2', Peer Nowack, 17 Nov 2024