This work is distributed under the Creative Commons Attribution 4.0 License.
Opening Pandora's box: How to constrain regional projections of the carbon cycle
Abstract. Climate projections from general circulation models (GCMs) participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6) are often employed to study the impact of future climate on ecosystems. However, especially at regional scales, climate projections display large biases in key forcing variables such as temperature and precipitation, which hamper predictive capacity. In this study, we examine different methods to constrain regional projections of the carbon cycle in Australia. We employ a dynamic global vegetation model (LPJ-GUESS) and force it with raw output from CMIP6 to assess the uncertainty associated with the choice of climate forcing. We then test different methods to either bias correct or calculate ensemble averages over the original forcing data to constrain the uncertainty in the regional projection of the Australian carbon cycle. We find that all bias correction methods reduce the bias of continental averages of steady-state carbon variables. Carbon pools are insensitive to the type of bias correction method applied, both for individual GCMs and for the arithmetic ensemble average across all corrected models. None of the bias correction methods consistently improves the change in carbon over time, highlighting the need to account for temporal properties in correction or ensemble averaging methods. Some bias correction methods reduce the ensemble uncertainty more than others, and the simulated vegetation distribution can depend on the bias correction method used. We further find that both the weighted ensemble averaging and the random forest approach reduce the bias in total ecosystem carbon to almost zero, clearly outperforming the arithmetic ensemble averaging method. The random forest approach also produces the results closest to the target dataset for the change in the total carbon pool and the seasonal carbon fluxes, emphasizing that machine learning approaches are promising tools for future studies.
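For orientation, the simplest member of the univariate bias correction family the abstract refers to is empirical quantile mapping. The sketch below (Python, synthetic data; an illustration of the general technique, not the authors' actual implementation) maps each GCM value onto the matching quantile of a reference distribution:

```python
import numpy as np

def empirical_quantile_mapping(model_hist, ref_hist, model_out):
    """Basic univariate bias correction: map each model value onto the
    reference (e.g. reanalysis) distribution via matching empirical quantiles."""
    q = np.linspace(0.0, 1.0, 101)
    mod_q = np.quantile(model_hist, q)   # model's empirical quantiles
    ref_q = np.quantile(ref_hist, q)     # reference empirical quantiles
    # Locate each value in the model's historical distribution ...
    ranks = np.interp(model_out, mod_q, q)
    # ... and replace it with the same quantile of the reference.
    return np.interp(ranks, q, ref_q)

# Synthetic example: a "GCM" with a wet precipitation bias against a reference.
rng = np.random.default_rng(42)
ref = rng.gamma(2.0, 30.0, size=5000)    # synthetic "reanalysis" precipitation
gcm = rng.gamma(2.0, 45.0, size=5000)    # synthetic biased "GCM" precipitation
corrected = empirical_quantile_mapping(gcm, ref, gcm)
print(f"GCM mean: {gcm.mean():.1f}, corrected: {corrected.mean():.1f}, "
      f"reference: {ref.mean():.1f}")
```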
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
- Preprint (8361 KB)
- Metadata XML
- BibTeX
- EndNote
- Final revised paper
Journal article(s) based on this preprint
Interactive discussion
Status: closed
-
AC1: 'Comment on egusphere-2022-623', Lina Teckentrup, 25 Jul 2022
We unfortunately noticed that we included the wrong version of Figure 2 after submission. The analysis in the manuscript is based on the correct figure, which we attach to this comment. We are sorry for any inconvenience!
Lina Teckentrup (on behalf of the co-authors)
-
RC1: 'Comment on egusphere-2022-623', Anonymous Referee #1, 13 Sep 2022
Any bias in the climate forcing directly influences the model projection of carbon cycle dynamics. Teckentrup et al. quantify the impacts of different bias correction methods (including univariate correction, multivariate correction, model averaging, and a random forest method) on improving the outputs of carbon stock changes from a dynamic global vegetation model (LPJ-GUESS). The draft is well written, but I still have some comments on the algorithms used in this analysis, and I think the novelty is insufficient for a paper in ESD.
Major comments:
- The biggest concern is that, after reading, I still have no idea which bias correction method should be used to assess the spatial variability or the short-term and long-term temporal variation in the total carbon stocks. The results are quite confusing. It would be good to evaluate and classify the correction methods by their function.
- The authors should perform a synthesis analysis and evaluation. The current results are very preliminary.
Specific comments:
Fig 1: It would be good to differentiate the steps and the names of the methods in each step. You could use different icons or colors.
Table 2: Some of these selected metrics reflect the same (or a similar) property. For example, the root mean squared error, normalised mean error, and mean bias error all indicate bias in the mean value, so a model with good skill in simulating the mean value tends to receive a higher rank. That is unfair.
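(A minimal sketch of the overlap this comment describes, using synthetic data; the metric definitions follow one common convention and are not taken from the manuscript:)

```python
import numpy as np

def rmse(mod, obs):
    """Root mean squared error."""
    return np.sqrt(np.mean((mod - obs) ** 2))

def mbe(mod, obs):
    """Mean bias error: positive if the model overestimates on average."""
    return np.mean(mod - obs)

def nme(mod, obs):
    """Normalised mean error (one common convention): absolute error
    scaled by the observed deviation from the observed mean."""
    return np.sum(np.abs(mod - obs)) / np.sum(np.abs(obs - np.mean(obs)))

# A model with a pure mean offset is penalised by all three metrics at once,
# so ranking on their sum rewards skill in the mean value several times over.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=1000)   # synthetic observations
mod = obs + 20.0                        # model with a constant positive bias
print(rmse(mod, obs), mbe(mod, obs), nme(mod, obs))
```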
Ln148: Why use the correlation of 0.3 as a threshold to select the models?
Ln305-314: Please clarify which meteorological forcings influence the mean value of C_total, the short-term variability (i.e., inter-annual variability) in C_total, and the long-term variability in C_total.
Ln320: In Fig. 3, only squares and circles indicate a larger bias in mean PPT after multivariate bias correction. Why?
Ln350: The authors should give a summarized metric showing which bias correction method is better. It is difficult to find the best model by eye.
The spatial patterns of the bias and CV of C_total simulated by the model in Fig 5 and Fig 6 show a clear and strange strip of extreme values. This does not seem reasonable. Could you please explain why this strip exists?
Ln374-375: It is not clear why C4 grasses would have a higher CV. The authors did not convince me that this is the real reason.
Ln379-380: The authors did not explain why the bias in C_total relates to foliar projective cover. Could you please show the relation between C_total and foliar projective cover? Which factors or processes can influence foliar projective cover in the LPJ-GUESS model?
Ln415-417: The peak of the seasonal GPP was strongly underestimated. Is this because the peak of the meteorological variables (like precipitation or temperature) was underestimated and not corrected?
Ln421-422: The bias in the dry season seems very small, so correcting the data in the dry season may not be very useful?
Ln442-445: The introduction of the importance of Australia for the estimation of the global land carbon sink should not be in the Discussion. It can be moved to the Introduction.
Ln469-470: Don’t repeat the results of the analysis in the Discussion.
Citation: https://doi.org/10.5194/egusphere-2022-623-RC1
- AC2: 'Reply on RC1', Lina Teckentrup, 10 Oct 2022
-
RC2: 'Comment on egusphere-2022-623', Anonymous Referee #2, 22 Sep 2022
Review of egusphere-2022-623
Opening Pandora’s box: How to constrain regional projections of the carbon cycle
By Teckentrup et al.
In this study, the authors analyze the impact of varying meteorological forcing obtained from the historical CMIP6 GCMs / ESMs on the historical carbon cycle. More specifically, they assess the impact of the selection of the simulated meteorological forcing on the response of the Australian carbon cycle using different strategies, e.g. bias correction, a random forest approach, and ensemble averaging methods, together with one dynamic global vegetation model, LPJ-GUESS. The authors compare the different methods and report their effect on the carbon cycle simulation of LPJ-GUESS in space and time.
The analyses presented by Teckentrup et al. are very interesting, comprehensive, and useful for understanding the impact of different meteorological forcings on the carbon cycle. In some places, the manuscript seems a bit overloaded, making it somewhat more difficult to grasp the full scope of the analyses. Overall, the manuscript is well written. I have a few general comments and a short list of specific comments. Thus, I recommend minor revisions before publication.
General comments:
- I like how the title reads and the scope of the study, but I find it a bit misleading. As far as I can judge, you are not looking into the projection of the carbon cycle, right? Projection, by definition, means simulating a potential future evolution of the system (e.g. boundary conditions are scenario-driven). Your analysis is based on historical simulations, where we have access to the boundary conditions, e.g. greenhouse gases, volcanic/anthropogenic aerosol loading, etc. So, I would use the phrase "regional simulations". Then I do not really see how you constrain the regional carbon cycle uncertainty. You show that one land model simulates a different CC response depending on the meteorological inputs. In your previous paper, you showed the uncertainty that is related to the variety of land models. So, I would say that you comprehensively demonstrate the entire uncertainty in simulating the carbon cycle related to the choice of models and choice of forcing, but I do not really see how you would go about constraining this uncertainty. The proposed bias correction methods etc. do not really contribute to reducing the uncertainty, since, if we now ran all TRENDY models with your reanalysis-corrected or "ensemble average weighting" meteorological forcing, we would end up with a similar uncertainty in the CC response. The bottom line is: maybe you should focus more on the "full uncertainty" aspect than on the "constraining" aspect in communicating your analysis.
- Overall, I am very surprised that the effect of CO2 on plants, e.g. on water-use efficiency, or the direct stimulation of carbon assimilation, is not discussed or even mentioned here. These effects are vital in simulating the carbon cycle under rising CO2. Were these effects accounted for in the LPJ-GUESS setup? I think so, since almost all runs show an increase in C_total, even those which received a decrease in precipitation and an increase in temperature as forcing. How else would Australian ecosystems accumulate more carbon under these circumstances? To estimate the impact of the meteorological forcing, the CO2 effects might not be essential, but these effects still need to be addressed and communicated.
- I am hesitant to suggest more analysis, since this manuscript already contains a lot of analysis and is a bit overloaded, so it is difficult to grasp its entire scope. Are that many supplementary figures needed? I would suggest assessing whether some parts of the manuscript could be reduced, so that it becomes more accessible to the reader and the key messages come across.
- But I have to suggest at least one additional analysis point: you only use one realization (r1i1p1f1) of each model. To really get an idea of how a specific GCM compares to reanalysis and other GCMs, one should analyze as many realizations as possible. I would even suggest obtaining meteorological forcing from grand / large ensembles, so that one can identify real biases in a model. One realization is not representative of the model, except when some data assimilation / nudging is conducted (e.g. as in a reanalysis). I know it would be too much work for this study, but one should think about it.
Specific Comments:
L18: What does "and above" mean here? and above global scale?
LL85-89: Rather long sentence containing many aspects - can you split it into at least two separate sentences?
LL92-94: I don't understand the logic of this sentence. The TRENDY models use identical meteorological forcing and show a large difference in the response of the carbon cycle to that forcing. So, this calls for reducing uncertainty in the land-surface model predictions, rather than in the meteorological forcing, no?
LL97-98: Can you provide more detail on what first generation and second generation DGVMs refer to?
L103: What simulation? Please be more specific. It is probably the “historical” simulation, but there are others, like esmHist, where the carbon cycle is fully coupled, etc.
L104: What about the information about atmospheric humidity, i.e. VPD?
LL106-7: Can you really do that? Shouldn't you recycle all the inputs consistently then? You can have strong precipitation with simultaneous high shortwave radiation - what does LPJ-GUESS make out of these physically implausible inputs?
LL108-109: This means you are doing some heavy downscaling of the input variables to a quite high resolution compared to the native resolution of the GCMs. Maybe it would be better to remap to a common 1x1 degree grid, no? Or maybe it would be better to use downscaled CMIP6 output, e.g. https://eartharxiv.org/repository/view/2646/
L125: I think that is not true. ERA5 is on a 0.5x0.5 grid and there is a derivative at 0.25x0.25, but 0.05 seems an extremely high resolution for a reanalysis.
Figure 1: I think it would benefit the understanding of Figure 1, if you provided a slightly more elaborate figure caption. At least, you could specify the acronyms used in the figure, so the figure is readable without searching in the text for the acronym definitions.
L140: Can you provide more information on this estimator?
Table 2: The definition of the summation notation would need more information to be mathematically correct, but I guess it is understandable as it is. https://en.wikipedia.org/wiki/Root-mean-square_deviation
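(For reference, the standard definition the linked page gives, written out in full notation: for predictions \(\hat{y}_i\) and reference values \(y_i\),)

```latex
\mathrm{RMSD} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2 }
```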
L143: Well, these models historically evolved, and they share code and concepts. It is hard to define which models are independent. Also, the models that are used to create the reanalyses, e.g. the IFS for ERA5, share code with CMIP6 models.
L147-148: Also, models that are highly dependent might not "correlate more" on a monthly time scale, as the atmosphere is chaotic and highly dependent on the initial state, etc. I would assume that the correlation of the spatial pattern in the climatological mean would provide more information. So I think similar spatial bias matching would give you an idea of whether models are similar or not, but maybe you do that; I did not fully understand.
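(A minimal sketch of the pattern-correlation diagnostic suggested here, with synthetic fields; area weighting and land masking are omitted for brevity:)

```python
import numpy as np

def pattern_correlation(field_a, field_b):
    """Pearson correlation between two climatological-mean maps,
    flattened over grid cells (non-finite cells, e.g. ocean, are skipped)."""
    a, b = field_a.ravel(), field_b.ravel()
    mask = np.isfinite(a) & np.isfinite(b)
    return np.corrcoef(a[mask], b[mask])[0, 1]

# Two synthetic "models" sharing a meridional temperature gradient plus noise:
rng = np.random.default_rng(1)
gradient = np.linspace(10.0, 30.0, 50)[:, None] * np.ones((50, 60))
model_a = gradient + rng.normal(0.0, 2.0, size=gradient.shape)
model_b = gradient + rng.normal(0.0, 2.0, size=gradient.shape)
print(pattern_correlation(model_a, model_b))  # high despite independent noise
```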
L166: “Let us define” ?
LL170-173: Does this part connect to any paragraph?
L180: If you used temperature on the Kelvin scale (so no negative values), one could use only this one function for scaling, consistently for all variables, no?
L191: “Let us denote” ?
L205: Not sure how this fits in the structure of the paragraph.
LL231-232: Then I really wonder why some representation of atmospheric humidity is not an input to LPJ-GUESS.
LL276-277: Can you explain why you include non-physical predictors such as longitude and latitude in the random forest approach? Especially for a regional study, I would advise against this practice.
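(A toy sketch of how one might test the practice questioned here: compare a forest with and without lon/lat as predictors on a spatially held-out region. Synthetic data and scikit-learn; not the authors' actual setup:)

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2000
lon = rng.uniform(113.0, 154.0, n)   # roughly Australian longitudes
lat = rng.uniform(-44.0, -10.0, n)
temp = rng.normal(22.0, 5.0, n)      # synthetic forcing variables
precip = rng.gamma(2.0, 30.0, n)
target = 0.05 * precip - 0.5 * temp + rng.normal(0.0, 1.0, n)

X_physical = np.column_stack([temp, precip])
X_with_geo = np.column_stack([temp, precip, lon, lat])

# Hold out the eastern half of the domain to mimic spatial extrapolation,
# rather than a random split that would let lon/lat act as lookup keys.
train, test = lon < 133.5, lon >= 133.5
for name, X in [("physical only", X_physical), ("with lon/lat", X_with_geo)]:
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[train], target[train])
    print(name, round(rf.score(X[test], target[test]), 3))  # out-of-region R^2
```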
Figure 2: b,d,f are the same - but I saw the uploaded corrected figure.
LL305-onwards: Would it make sense to compare carbon fluxes from the actual CMIP6 models to get an estimate of the carbon cycle uncertainty? Not all models have one (e.g. MPI-ESM1-2-HR), but most have some representation of the carbon cycle and the carbon fluxes. I also understand if you only wanted to focus on the effect of the selection of the meteorological forcing.
Figure 3: “PPT” is a rarely seen abbreviation for precipitation, better pr?
LL446-447: In the context of Australia, I would assume one can also add "improved prediction of fire risk", as fire depends largely on the fuel load and thus on vegetation / the carbon cycle.
LL589-590: Counter-argument: one should not rely on only one DGVM for studies on ecosystem / carbon cycle impacts. Maybe you can make the point that we should use multiple DGVMs and multiple GCM forcings.
Citation: https://doi.org/10.5194/egusphere-2022-623-RC2
- AC3: 'Reply on RC2', Lina Teckentrup, 10 Oct 2022
Peer review completion
Journal article(s) based on this preprint
Viewed
- HTML: 434
- PDF: 175
- XML: 19
- Total: 628
- BibTeX: 8
- EndNote: 6
Martin Gerard De Kauwe
Gab Abramowitz
Andrew John Pitman
Anna Maria Ukkola
Sanaa Hobeichi
Bastien François
Benjamin Smith