This work is distributed under the Creative Commons Attribution 4.0 License.
Using free air CO2 enrichment data to constrain land surface model projections of the terrestrial carbon cycle
Abstract. Predicting the response of terrestrial ecosystem carbon to future global change relies strongly on our ability to model the underlying processes accurately at a global scale. However, terrestrial biosphere models representing the carbon and nitrogen cycles and their interactions remain subject to large uncertainties, partly because of unknown or poorly constrained parameters. Data assimilation is a powerful tool that can be used to optimise these parameters by confronting the model with observations. In this paper, we identify sensitive parameters in a recent version of the ORCHIDEE land surface model that includes the nitrogen cycle. These include parameters involved in the parameterisations controlling the impact of the nitrogen cycle on the carbon cycle and, in particular, the limitation of photosynthesis due to leaf nitrogen availability. We optimise these ORCHIDEE parameters against carbon flux data collected at sites from the Fluxnet network. However, optimising against present-day observations does not automatically give us confidence in the future projections of the model, given that environmental conditions are likely to shift relative to the present day. Manipulation experiments offer a way to probe how ecosystems may respond to such future environmental changes. One such type of experiment, Free Air CO2 Enrichment (FACE), provides a unique opportunity to assess vegetation responses to increasing CO2 by providing data under both ambient and elevated CO2 conditions. Therefore, to better capture the ecosystem response to increased CO2, we add data from two FACE sites to our optimisations, alongside the Fluxnet data. Using data from both CO2 conditions of the FACE experiments gives us extra confidence in the model simulations run with this set of parameters. We find that we are able to improve the magnitude of modelled productivity.
Although we are unable to fully correct the interannual variability, we begin to simulate the possible progressive nitrogen limitation at one of the sites. Using an idealised simulation experiment in which atmospheric CO2 increases by 1 % per year over 100 years, we find that the rate of CO2 fertilisation is much lower when Free Air CO2 Enrichment data have been included in the optimisation.
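As an editorial aside, the idealised forcing described in the abstract can be sketched in a few lines; the starting concentration below is an assumed example value, not one taken from the paper:

```python
# Illustrative sketch of the idealised "1 % per year" forcing (not code from
# the paper): atmospheric CO2 compounds by a fixed fraction every year.
def co2_ramp(c0, years, rate=0.01):
    """Return the CO2 trajectory [ppm] for a compound annual increase."""
    return [c0 * (1.0 + rate) ** t for t in range(years + 1)]

# 350 ppm is an assumed illustrative starting value.
trajectory = co2_ramp(350.0, 100)
print(round(trajectory[-1] / trajectory[0], 2))  # ~2.7x the starting concentration
```

At 1 % per year the concentration grows by a factor of 1.01^100 ≈ 2.7 over the century, which is why such runs are a convenient stress test of modelled CO2 fertilisation.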
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-360', Martin De Kauwe, 31 Mar 2023
Overall, I applaud the authors on the use of multi-observation streams, including manipulation experiments, to explore the extent to which they can better optimise their LSM. The paper is logically put together and clearly written. I have a few potential issues but hopefully nothing that would preclude this manuscript from being published with some revision. This is an exciting approach, and you could see how it might be extended to a broader range of flux/satellite and manipulation experiments (e.g. https://onlinelibrary.wiley.com/doi/full/10.1111/gcb.16585).
One issue I have with the aims in the intro relates to the extent to which it makes sense to optimise against both ambient and elevated conditions (your line 77). The underlying driving principle of the FACE model intercomparison was that the models should be able to predict the eCO2 response given existing parameterisations and underlying theory. If the parameters need to be re-optimised for eCO2 conditions, this is fine, but it implies the models lack the theory to predict these changes. For example, in De Kauwe et al. 2013, we tested whether the stomatal slope changes with eCO2 (Fig 6) and found it did not (i.e. if the theory is right, the ambient parameterisation should allow you to predict the eCO2 response), supporting the point I just made. This is not to say it isn't a valid question to ask/test, but I think the text requires more nuance than we currently see. For example, to achieve the improved fit when eCO2 was considered, what params had to change? More on this point below.
A second issue I have relates to the breadth of parameters to be optimised and the data constraint. For example, to what extent can we expect to constrain a range of these quantities (e.g. litterfall, leafage, LAtoSA, etc) from gross or net carbon fluxes? Clearly indirectly a quantity like the leaf-2-sapwood area affects LAI, which affects GPP, but it feels quite downstream and I wonder why we think the gross flux would offer a strong (any) constraint here? Equally, if it turns out that it does (I'm writing this comment in advance of reading the results), what does this really mean? Again not a problem, but I think as a reader I'd appreciate a sentence or two in the methods on rationale or expectations here.
A third issue and one to tackle in the discussion relates to how best to utilise FACE data. Overall, I applaud the approach taken here by the ORCHIDEE group but equally have concerns that an implication of calibrating to ORNL for *all* broadleaved sites could inadvertently impose a progressive nitrogen limitation (as observed at this experimental site) across sites where this may not be true. I think it is important to predict the response at ORNL for the right reasons (see Zaehle et al. 2013 New Phyt) rather than using this as a basis for all nitrogen-limited sites. I'd go as far as to say it may even make sense *not* to well match the decline in NPP at this site from a standard tuning exercise.
The fourth issue, and one that I felt was lacking from the results, is the link between what was tuned and the emergent results. To improve the fluxes, is a similar set of params being adjusted for each PFT group? Or to obtain better fluxes, are the tuned params bespoke per site? Apart from the discussion about Ksoil, I'm entirely unclear how and by how much the parameters have changed; this relates to my "second issue" above. As a reader, we do not get any sense of which of the params in Table 1 are most important; effectively, the source of the improvement is opaque. I'm left to ponder if the tuned params made logical sense or if they reflect a repartitioning of model error or uncertainty.
Fifth, in the discussion of Figure 5, you might be able to link to the Walker 2015 paper (https://agupubs.onlinelibrary.wiley.com/doi/10.1002/2014GB004995); there they also imagined what would happen to models if ORNL and Duke continued for 300 years. This is similar to your idealised experiment. In particular, that paper extracts the change in N availability over time, which could be something you potentially examine in your analysis. Currently, when you optimise against FACE vs Fluxnet, you get a much lower GPP response, which leads me to wonder how you've affected N cycling cf. Walker et al.
Finally, the question that I wondered as I was reading the results: "to what extent the calibration against flux + FACE improves the estimated fluxes in Figure 1 (or not)?". I think section 3.2 & Fig 4 addresses this question but I got a bit lost as to whether the multi-site calibration was then being used to re-examine the flux predictions or not.
Minor
=====
- Line 54. "experiment": suggest you change it to "experiments" or "one such type of experiment"; currently the text implies there is only a single FACE experiment
- Line 57-59. I didn't really get the point of these two sentences, they aren't linked at all to the text. There have been various other reviews of FACE which you don't list but could do. I think you either need to link this to the existing text or omit these lines.
- Line 203. While I do take the argument being made here, I do wonder if it makes sense to calibrate against all of the fluxes. I feel like you would calibrate against TER and NEE and perhaps the assessment would be on how GPP would change. I don't necessarily have a recommended change or response needed here, but it feels intuitively wrong to calibrate against all three fluxes. The authors are far more the experts in this field though so it is up to them of course.
Martin De Kauwe
Citation: https://doi.org/10.5194/egusphere-2023-360-RC1
- AC1: 'Reply on RC1', Nina Raoult, 09 Jun 2023
RC2: 'Comment on egusphere-2023-360', Anonymous Referee #2, 31 Mar 2023
Raoult describes a model parameter tuning exercise using the ORCHIDEE model and Fluxnet + CO2 FACE experiments. It builds on recent work comparing model simulations to global change experiments by directly tuning the model to FACE experiments for two PFT types.
The take-home messages are that the parameter tuning with the FACE studies seems to improve some aspects of the model and not others, and that tuning to the flux data alone can alter the predicted response to rising CO2 (I did find it interesting that the prior and the Flux+FACE predictions in Figure 5 were similar, suggesting knowledge of how the model responded to CO2 likely influenced past parameter sets).
I appreciate the inclusion of the FlxGN-AMB simulation because without its inclusion it would be hard to separate the influence of the type of data at the FACE sites (NPP, LAI) from the use of the experiments.
I would challenge the authors to make the manuscript have more impact beyond users of the ORCHIDEE model. What can other modeling groups learn from the manuscript? As an example, the manuscript discusses how the inclusion of the nitrogen cycle influenced the capacity to tune the model to the FACE studies. However, this is not directly tested in the results in a standardized framework. Adding those results would alert other modeling groups to be cautious with optimizing a C + N model using a similar approach as a C-only model. Relatedly, are there any suggestions about how to update the parameter tuning to work better with N-cycle models? It seems the KSoil parameter was an issue because it adjusted both C and N pools. Should there be a KSoil for each soil pool, so that pools with high N mineralization rates are adjusted differently than ones with lower N mineralization rates?
Specific comments:
- More description of the ORNL and Duke FACE data is needed. How did you handle the split-plot design at Duke FACE? Where specifically did the data come from? Pulled from the table of a manuscript or from a data repository?
- The use of parameter priors isn’t clear. In standard Bayesian statistics, priors have a prior distribution. I think this prior distribution is a combination of the mean in Table 1 and the B matrix. A description of how the B matrix is created seems to be missing.
- The posterior parameter values should be added to Table 1 for the parameters that were fit.
- How is the R matrix determined?
- I recommend adding a discussion of the results in the context of Rastetter, E. B. (1996). Validating Models of Ecosystem Response to Global Change. BioScience, 46(3), 190–198. https://doi.org/10.2307/1312740.
- Line 436: the sentence talks about how the study reduces parameter uncertainty but the manuscript doesn’t actually present the prior vs. posterior parameter uncertainty or any ensemble of simulations with different parameter values from the prior and posterior distributions. The study optimizes the parameter value to be more consistent with the observations but doesn’t necessarily reduce the uncertainty.
Citation: https://doi.org/10.5194/egusphere-2023-360-RC2
- AC2: 'Reply on RC2', Nina Raoult, 09 Jun 2023
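As an editorial aside to the referee's questions about the B and R matrices: in a standard variational (Bayesian) parameter-estimation setting, B is the prior parameter error covariance and R the observation error covariance, and both enter a quadratic cost function. A minimal sketch, with invented toy values, not the paper's actual implementation:

```python
import numpy as np

# Editorial illustration of the standard variational cost function:
# the prior spread is encoded in the parameter error covariance B,
# the observation errors in the covariance matrix R.
def cost(x, xb, y, hx, B, R):
    """Quadratic cost: prior misfit weighted by B^-1 plus obs misfit weighted by R^-1."""
    dx = x - xb   # departure from the prior (background) parameters
    dy = y - hx   # model-observation misfit, hx = H(x)
    return 0.5 * (dx @ np.linalg.solve(B, dx) + dy @ np.linalg.solve(R, dy))

# Toy example with two parameters and three observations (all values invented):
xb = np.array([1.0, 2.0])        # prior parameter values
B = np.diag([0.25, 0.5])         # prior error covariance
y = np.array([3.0, 3.5, 4.0])    # observations
R = np.eye(3) * 0.1              # observation error covariance
print(cost(xb, xb, y, y, B, R))  # zero at the prior when the model fits perfectly
```

The referee's point is that the choice of B and R fixes how the optimiser trades prior information against data, so both deserve an explicit description in the methods.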
AC3: 'Reply on RC2', Nina Raoult, 09 Jun 2023
Publisher’s note: this comment is a copy of AC2 and its content was therefore removed.
Citation: https://doi.org/10.5194/egusphere-2023-360-AC3
Viewed

HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
417 | 145 | 28 | 590 | 19 | 17
Louis-Axel Edouard-Rambaut
Nicolas Vuichard
Vladislav Bastrikov
Anne Sofie Lansø
Bertrand Guenet
Philippe Peylin