This work is distributed under the Creative Commons Attribution 4.0 License.
Process-oriented models of autumn leaf phenology: ways to sound calibration and implications of uncertain projections
Abstract. Autumn leaf phenology marks the end of the growing season, during which trees assimilate atmospheric CO2. Since autumn leaf phenology responds to climatic conditions, climate change affects the length of the growing season. Thus, autumn leaf phenology is often modelled to assess possible climate change effects on the future CO2-mitigating capacities and species compositions of forests. Projected trends have mainly been discussed with regard to model performance and climate change scenarios. However, there has been no systematic and thorough evaluation of how performance and projections are affected by the calibration approach. Here, we analyzed >2.3 million model performances and 39 million projections across 21 models, 5 optimization algorithms, ≥7 sampling procedures, and 26 climate model chains from two representative concentration pathways. Calibration and validation were based on >45 000 observations for beech, oak, and larch from 500 Central European sites each.
Phenology models had the largest influence on model performance. The best performing models were (1) driven by daily temperature, day length, and partly by seasonal temperature or spring leaf phenology, (2) calibrated with the Generalized Simulated Annealing algorithm, and (3) based on systematically balanced or stratified samples. Assuming an advancing spring phenology, projected autumn phenology shifts between −13 and +20 days by 2080–2099, resulting in a lengthening of the growing season by 7–40 days. Climate scenarios and sites explained more than 80 % of the variance in these shifts and thus had 8 to 22 times the influence of the phenology models. Warmer climate scenarios and better performing models predominantly extended the growing season more than cooler scenarios and poorer models.
Our results justify inferences from comparisons of process-oriented phenology models to phenology-driving processes and we advocate species-specific models for such analyses and subsequent projections. For sound calibration, we recommend a combination of cross-validations and independent tests, using randomly selected sites from stratified bins based on mean annual temperature and average autumn phenology, respectively. Poor performance and little influence of phenology models on autumn phenology projections suggest that the models are overlooking relevant drivers. While the uncertain projections indicate an extension of the growing season, further studies are needed to develop models that adequately consider the relevant processes for autumn phenology.
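The stratified sampling recommended here can be illustrated with a short sketch. This is not the study's code: the site IDs, temperatures, bin count, and split fraction are all invented, and only the idea, drawing sites at random from bins stratified by mean annual temperature, is taken from the text.

```python
# Hypothetical sketch of stratified site sampling by mean annual
# temperature (MAT) bins for calibration/validation splits.
import random

def stratified_site_sample(site_mat, n_bins=5, frac=0.5, seed=42):
    """Randomly draw a fraction of sites from each MAT bin.

    site_mat: dict mapping site ID -> mean annual temperature (degC)
    Returns a list of selected site IDs (e.g. the calibration set).
    """
    rng = random.Random(seed)
    mats = list(site_mat.values())
    lo, hi = min(mats), max(mats)
    width = (hi - lo) / n_bins or 1.0
    # Assign each site to a temperature bin
    bins = {}
    for site, mat in site_mat.items():
        idx = min(int((mat - lo) / width), n_bins - 1)
        bins.setdefault(idx, []).append(site)
    # Draw the same fraction from every bin so all climates are represented
    sample = []
    for sites_in_bin in bins.values():
        k = max(1, round(frac * len(sites_in_bin)))
        sample.extend(rng.sample(sites_in_bin, k))
    return sample

# Invented example: 100 sites spanning MAT 0.6-10.5 degC
sites = {f"site_{i:03d}": 0.6 + 0.1 * i for i in range(100)}
calib = stratified_site_sample(sites, n_bins=5, frac=0.5)
valid = [s for s in sites if s not in calib]
```

Compared with a plain random split, this keeps cold and warm sites represented in both the calibration and validation sets, which is the point of the stratification advocated above.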
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2022-1423', Anonymous Referee #1, 13 Apr 2023
General comments:
This manuscript reports a very comprehensive assessment of current process-oriented models of leaf senescence in temperate deciduous trees. It considers aspects that are rarely addressed in phenological modelling, and often overlooked or reported quite superficially in other manuscripts, despite their potentially strong influence on the interpretation of the research. The considered aspects relate to model calibration and evaluation (namely the scale of calibration, site- vs. species-scale; the choice and parameterization of optimization algorithms; the cal/val sampling strategy) and their effect on model projections.
In essence, this manuscript has the potential to become a vademecum for phenological modellers, and possibly beyond this community (I'm thinking here of modellers working with models simple/fast enough to allow large numbers of computations), providing an example of how to rigorously design cal/val and projection studies. The downside is that the manuscript is very long, and sometimes difficult to follow due to the comprehensiveness of the tests performed and the results reported. This is not prohibitive to me, and I would like to read phenological modelling studies conducted in such a rigorous way more often. Hence I do not ask for a general reduction of the manuscript length. However, I strongly recommend the authors to provide a section (such sections are called "boxes" in some journals) highlighting what they identified from their work as good practices for phenological modelling.
This section would ideally list items regarding the aspects they deal with (e.g. "how to rigorously sample a phenological database for cal/val of a phenological model", "which optimization algorithm to choose", etc.) and give practical numbers / orders of magnitude / rules of thumb useful to other modellers. This may require including in this "box" some definitions located here and elsewhere in the manuscript, possibly with examples (e.g. what is a "stratified" sampling, etc.). I think most of this is already present in the manuscript, but it is dispersed and quite difficult to find. To put it more bluntly, currently the manuscript has a strong potential but no strong take-home message, and leaves the reader wrung out after an avalanche of valuable information. In other words, I would like this manuscript to offer two levels of reading: the very detailed one that is currently presented, and another, more synthetic one, which would help spread good practices in phenological modelling.
Specific comments:
L25-26: should this sentence be understood "in general" for all leaf senescence models?
L50: In French, it is customary to call this scientist "Réaumur" (not "De Réaumur")
L63: meaning of "was not given" ? Clarify
L88-89: about space-for-time approach, the paper by Jochner et al. is interesting, and not appearing in the reference list: Susanne Jochner, Amelia Caffarra, Annette Menzel, Can spatial data substitute temporal data in phenological modelling? A survey using birch flowering, Tree Physiology, Volume 33, Issue 12, December 2013, Pages 1256–1268, https://doi.org/10.1093/treephys/tpt079
L113 "in contrast": I do not understand the logical link with preceding sentence here
L120-121: site-specific vs. species-specific calibration: behind this is the question of local adaptation of tree populations, which is not considered here. It was in Delpierre et al. 2009 (and Chuine et al. 2000). Mention it somewhere.
L153: the assertion "the proper order (e.g. the date for leaf coloration was before the date for leaf fall)" is wrong: leaf fall can occur before leaf coloration. Or at least, part of the leaf fall can occur before reaching BBCH94 (= 40% of leaves colored or fallen) considered in this paper (L159). Which BBCH code did you consider for leaf fall?
L154: "After corresponding correction..." is unclear. Rephrase.
L174: why using Tmin as a "general" driver? Models often use the daily average temperature.
L178-179: 0.25° is a quite coarse spatial resolution, notably when it comes to mountainous areas. Any correction of temperature with altitude (through lapse rate)?
L183: what is a "climate model chain"? Is one CMC corresponding to one particular climate model run under a particular RCP?
Table 1: From Suppl Mat 2, it is unclear how you implemented the relatively complex responses to GSI and Anet described in Zani et al. 2020. Did you actually code those responses? What raised doubts for me is your use of the term "apparent photosynthesis" in Suppl Mat 2, though it seems from Suppl Mat 1 that you indeed computed photosynthesis. This should be clarified in Suppl Mat 2, e.g. with a mention of Zani et al. 2020 (their suppl. mat.).
L248: were the models tested in their ability to simulate trends in observed data (if any, over 20–65 years)?
L259-260: Does this mean that the parameters used for model evaluation and model projections were possibly different? Why so?
L440-441: I'm unsure we are here talking about model external validation. If this is the case, it is not particularly surprising that site-specific calibration can sometimes yield, at the very same site, unrealistic results when the model is used to predict unknown data. Indeed, site-specific parametrisations are more prone to over-fitting (few data points over which to fit the model) than species-specific parametrisations (which include many sites).
Fig. 3b: inversion of the x-axis is confusing. It took me several minutes to understand this subplot (as compared to the text description), because I intuitively tried to interpret the x-axis with negative values on the left of the vertical dashed line.
L531: how possible is this? Considering that NA-producing and non-converging runs are assigned high RMSE values.
L534-548: a question relative to the optimization algorithms is: is the identity of the algorithm involved, or is it more the design of the simulation? In other words, we see in SM4, Table S1 that the "normal" vs. "extended" runs of the calibration procedures include fewer vs. more iterations. I do not see an analysis considering the influence of this number of iterations per algorithm. Apparently, the influence of iteration number is small (Fig. 3b: points are close whether considering "norm" or "extd" for one algorithm). If the "extd" simulation number was extended further, would that modify the results?
Fig 4b: if CMC numbers point to particular climate models, one sees that the effect of the climate model can be huge! Were these climate models unbiased (against local observed climate data)? Climate model bias can strongly influence process-based model simulations, see e.g. Jourdan et al. 2021.
L621-623: I do not see this on Fig. 4b (i.e. dot coefficients of DM2 and DM2Za20 are not remarkable relative to other models)
L698: rewrite to "phenology"
L711-712: recall which models lead to the best results in Zani et al. 2020
L741 "other models": recall briefly their characteristics here
L750: rephrase to "local adaptation (Peaucelle et al. 2019)"
L752: "the less such consideration is possible": unclear, rephrase.
L758-759: and/or that the observed data are more prone to observation bias, which can be magnified if the same observer operates across years at a given site (in practice, observer inter-calibrations are rare). See Liu et al. 2021.
L788-789: interesting result. Where does this "17 sites" come from? I do not remember seeing that earlier in the manuscript.
L819: Cochran (1946) is missing from the reference list
L852: remove "but see"
L889-891: we touch here the question of local adaptation again
L897: "Different models altered the reference shifts by -12 to +2 days", recall the average.
L970: "... and found our data to strongly encourage further research" is unclear.
References:
Chuine, I., Belmonte, J., and Mignot, A. (2000). A modelling analysis of the genetic variation of phenology between tree populations. Journal of Ecology, 88(4), 561–570.
Delpierre, N., Dufrêne, E., Soudani, K., Ulrich, E., Cecchini, S., Boé, J., and François, C. (2009). Modelling interannual and spatial variability of leaf senescence for three deciduous tree species in France. Agricultural and Forest Meteorology, 149(6–7), 938–948.
Jochner, S., Caffarra, A., and Menzel, A. (2013). Can spatial data substitute temporal data in phenological modelling? A survey using birch flowering. Tree Physiology, 33(12), 1256–1268. https://doi.org/10.1093/treephys/tpt079
Jourdan, M., François, C., Delpierre, N., St-Paul, N. M., and Dufrêne, E. (2021). Reliable predictions of forest ecosystem functioning require flawless climate forcings. Agricultural and Forest Meteorology, 311, 108703.
Liu, G., Chuine, I., Denéchère, R., Jean, F., Dufrêne, E., Vincent, G., ... and Delpierre, N. (2021). Higher sample sizes and observer inter-calibration are needed for reliable scoring of leaf phenology in trees. Journal of Ecology, 109(6), 2461–2474.
Citation: https://doi.org/10.5194/egusphere-2022-1423-RC1
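As an aside to the referee's question about temperature and elevation (manuscript L178-179), a lapse-rate correction of the kind asked about can be sketched as follows. This is not code from the study; the 6.5 K km-1 rate is the standard-atmosphere value, and the grid and site elevations are invented.

```python
# Hypothetical sketch of a lapse-rate correction for coarse-grid temperature.
LAPSE_RATE = 6.5e-3  # K per m, standard-atmosphere environmental lapse rate

def correct_temperature(t_grid_c, z_grid_m, z_site_m, lapse_rate=LAPSE_RATE):
    """Shift a grid-cell temperature (degC) from the cell's mean elevation
    to the site elevation: temperature drops as elevation rises."""
    return t_grid_c - lapse_rate * (z_site_m - z_grid_m)

# Invented example: a larch site 800 m above its 0.25-degree cell's mean elevation
t_site = correct_temperature(t_grid_c=10.0, z_grid_m=1200.0, z_site_m=2000.0)
# about 4.8 degC, i.e. 5.2 K cooler than the grid-cell value
```

A constant lapse rate is a first-order fix only; inversions and seasonal variation would require a more elaborate downscaling.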
AC1: 'Reply on RC1', Michael Meier, 12 Jul 2023
Thank you for your kind summary and general comments. We are pleased to hear that you see the potential of our manuscript for a vademecum. We have considered your suggestion to add a section/box that summarizes our study and have responded to your specific comments in the attached PDF.
RC2: 'Comment on egusphere-2022-1423', Anonymous Referee #2, 07 Jun 2023
Autumn leaf phenology impacts the biochemical and biophysical feedbacks of forests to climate. Modelling and projecting the autumn leaf phenology of deciduous trees is therefore important and timely. Several studies have proposed and compared various modelling approaches. This study is different in that it does not focus on a new modelling approach or only compare existing approaches, but integrates model comparison with an analysis of the impact of different calibration procedures (e.g. site vs. species), optimization, data sampling procedures, etc., considering their impact on model performance and model projections. For the latter aspect, analyses of the different scenarios are also considered. I find the study important and well done. The manuscript is also easy to read and very nicely synthesizes a huge amount of data. Practical, useful recommendations are made in the conclusions. I have, however, some suggestions for improvement.
1. While the text is clear, a scheme of the methodology, i.e. a schematic synthesis of the different analyses performed, performance indicators used, etc., would be useful.
2. I realize the analysis of the different formulations of the models considered is not the main focus of the study; yet, the different models are discussed and they will surely attract interest. So, I would add in Methods (not only in the supplementary material) a paragraph with a general description of the different types of models used (e.g. only driven by current temperature and photoperiod, or modulated by summer conditions, or by budburst timing), their key drivers, etc. In practice, a description of Table 1.
3. In the Abstract and the entire text, I would not stress the modelled data on growing season length too much, but rather focus on the date of autumn phenology. In fact, the data on growing season length are crucially affected by spring phenology, which was only very coarsely estimated here.
4. The authors do not fully consider another source of uncertainty, which is the quality of the observational data, including past climate data. For example, are the biases associated with considering climate at 25 km resolution negligible? (L79) I'm worried particularly about Larix sites, which are often found in mountainous regions. Similarly: what about the spatial match between the LAI and soil water characteristics used and the phenology data from PEP? Could large biases (at the site level) be introduced?
5. Autumn leaf phenology is actually made up of several phenological events (e.g. onset of chlorophyll degradation, 50 % leaf coloration, leaf fall), with timing varying by several weeks (e.g. Marien et al. 2019, New Phytologist, doi: 10.1111/nph.15991); are the models simulating the same exact event? (Which one?)
L164: to my knowledge, beech does not grow at sites with a MAT below 6–7 °C. A beech site at 0.6 °C MAT (subarctic conditions) is quite unrealistic.
L843-845: the explanation based on the severity of extremes is questionable; see Marien et al. 2021, Biogeosciences (doi.org/10.5194/bg-18-3309-2021), and for a more fundamental impact of drought on autumn phenology see Marchin et al. 2010, Oecologia (DOI 10.1007/s00442-010-1614-4).
L906-907: "… all analyzed models are based on the same process …". I do not agree: models based on current autumn conditions (temperature and day length) are different from models also considering the impact of, for example, summer (e.g. implying a legacy of tree growth on senescence) or budburst (e.g. implying a constraint on leaf longevity).
Citation: https://doi.org/10.5194/egusphere-2022-1423-RC2
AC2: 'Reply on RC2', Michael Meier, 12 Jul 2023
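On the calibration theme running through both reviews: the Generalized Simulated Annealing approach highlighted in the abstract can be caricatured with a minimal annealing loop. This is a stripped-down sketch, not the GenSA implementation used in the study; the toy senescence model, the synthetic data, and the linear cooling schedule are invented for illustration.

```python
# Hypothetical sketch: simulated annealing minimizing the RMSE of a toy
# linear senescence model doy = a + b * (forcing - 1200) on synthetic data.
import math
import random

rng = random.Random(0)
forcing = [rng.uniform(800.0, 1600.0) for _ in range(50)]      # fake seasonal forcing
obs = [250.0 + 0.02 * (f - 1200.0) + rng.gauss(0.0, 2.0) for f in forcing]

def rmse(a, b):
    """Root-mean-square error of the toy model against the synthetic dates."""
    err = [(a + b * (f - 1200.0) - o) ** 2 for f, o in zip(forcing, obs)]
    return math.sqrt(sum(err) / len(err))

def anneal(steps=20000, t0=5.0):
    a, b = 225.0, 0.0                       # start away from the optimum
    best = cur = rmse(a, b)
    best_ab = (a, b)
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-9   # linear cooling
        na = a + rng.gauss(0.0, 1.0)           # propose a neighbouring state
        nb = b + rng.gauss(0.0, 0.005)
        new = rmse(na, nb)
        # Accept improvements always, worse states with Boltzmann probability
        if new < cur or rng.random() < math.exp((cur - new) / temp):
            a, b, cur = na, nb, new
            if cur < best:
                best, best_ab = cur, (a, b)
    return best_ab, best

(a_fit, b_fit), score = anneal()
```

The accept-worse step with Boltzmann probability is what distinguishes annealing from plain hill climbing; GenSA generalizes the visiting and acceptance distributions, which is why it escapes local minima more readily than this basic variant.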
Christof Bigler