Complex Realities, Simple Signals? Global Evaluation of Early Warning Signals for Forest Mortality Events
Abstract. Forests around the world are increasingly experiencing large-scale regional mortality events as a result of droughts and heat waves. Despite their considerable impacts on the material, non-material, and regulatory contributions of forest ecosystems to people, these mortality events remain difficult to predict. Temporal Early Warning Signals (EWS) based on the concept of Critical Slowing Down (CSD) have been applied widely to remotely sensed vegetation indices. These EWS have often been interpreted as indicators of resilience loss. In order to be of practical use in real-world ecosystem management, such EWS must demonstrate the capacity to reliably and robustly predict upcoming forest mortality events. Previous work has applied EWS for local cases of mortality, but to date, there is no global assessment of EWS on remotely sensed vegetation indices of forest mortality events. The objective of this study is threefold: 1) to provide an overview of various types of EWS as applied to forest mortality events in case studies around the world, 2) to empirically assess the effectiveness of different EWS in predicting globally distributed forest mortality events driven by droughts and heat waves using remote sensing time series, and 3) to conduct a driver analysis to evaluate which factors associated with the methodological setup, the characteristics of the mortality event and climatic conditions explain the performance of different EWS. We find that most previous work in predicting forest mortality events is based on tree ring data. In remote sensing applications, there is a significant lack of robust evaluation of CSD-based EWS using control cases. Our empirical analysis indicates that all of the EWS that were evaluated in this study are ineffective and lack the necessary robustness to serve as predictors of drought-induced forest mortality events. The primary factor that determines trends in EWS is the methodological setup employed. 
We conclude by calling for more caution in the application of system-agnostic CSD-based EWS, increased efforts to assess accuracy and uncertainty, and more consideration of the system characteristics and actual needs of ecosystem managers when assessing forest resilience and early warning systems.
Competing interests: One of the co-authors is an editor on the special issue.
Summary
This study investigates the predictive power of CSD-based Early Warning Signals (EWS) for forest dieback events. To this end, the authors first conduct a literature review, finding that most such studies are performed on tree ring data and that ~33% of the studies are CSD-based. Most studies validate their results by comparing EWS between mortality and control sites. The authors then conduct a driver analysis, training one boosted regression tree model per EWS per vegetation data set, with the goal of predicting the Kendall tau of the EWS in the data set from various variables, comprising choices in the EWS computation, characteristics of the mortality event, and climate characteristics. Sites are included if a forest mortality event occurred, and the authors ensure a balanced training data set by pairing each dieback case with a non-dieback case chosen from its vicinity. The authors state that "if the EWS work to robustly predict mortality events, we would expect characteristics of the event, i.e., the resistance and recovery rate of the die-back, to be the key predictors of Kendall tau trends in all EWS, jointly with climate information". However, they find poor predictability. Instead, the most important features for predicting the EWS are 1) timeframe, 2) window length, and, less consistently, 3) resistance and 4) recovery rate (for most EWS-data combinations). They conclude that EWS based on CSD are thus ineffective and non-robust predictors of forest mortality events.
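For concreteness, the quantity being modelled here, the Kendall tau trend of a rolling CSD indicator, can be sketched as follows. This is a generic illustration of the standard pipeline (rolling lag-1 autocorrelation, then Kendall tau against time), not the authors' actual code; the toy series and all parameter values are my own assumptions:

```python
import numpy as np
from scipy.stats import kendalltau

def ar1_ews_trend(series, window):
    """Rolling lag-1 autocorrelation (a standard CSD indicator) and its
    Kendall tau trend against time. `series` is assumed to be already
    detrended/deseasonalized, as is usual before computing CSD-based EWS."""
    ar1 = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        # lag-1 autocorrelation within the sliding window
        ar1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    ar1 = np.asarray(ar1)
    tau, p = kendalltau(np.arange(len(ar1)), ar1)
    return ar1, tau, p

# Toy example: an AR(1) process whose coefficient drifts toward 1,
# mimicking critical slowing down ahead of a transition.
rng = np.random.default_rng(0)
n = 300
phi = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ar1_vals, tau, p = ar1_ews_trend(x, window=60)
print(round(tau, 2))  # positive tau: the indicator trends upward
```

Note that tau only summarizes the direction of the indicator's drift; turning it into a warning still requires an (arbitrary) decision threshold, which is part of the methodological sensitivity discussed below.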
General comments
I agree that it would be great if we could predict forest mortality events. However, I wonder whether the term Early Warning Signal (EWS) was optimally chosen when the scientific community agreed on using it for Critical-Slowing-Down (CSD) indicators. These indicators are, at best, statistical proxies of the recovery rate of the system, which decreases as the system approaches a fold-bifurcation-type tipping point. By design, they cannot warn of a tipping by themselves, and could probably be used as "EWS" only in combination with some kind of threshold. However, I am not aware of such a threshold and doubt it would be robust, as probably do Scheffer et al. (2009) when they write: "We note also that most of the signals we have discussed should still be interpreted in a relative sense. For instance, although autocorrelation is predicted to approach unity at a fold bifurcation, measurement noise will tend to reduce correlations." Hence, I also doubt that Scheffer et al. (2009) claim that "EWS based on the concept of alternative stable states and the theory of CSD have been proposed as a promising alternative approach to predict forest mortality events" (l. 48f). Instead, what is often assessed is the change of these proxies, but all such changes can tell us is WHETHER the system is losing stability. I thus believe that even though they are termed EWS, it is not the CSD indicators' fault that they cannot predict forest mortality; rather, the unfortunate wording led to overoptimistic applications in a way the indicators were never meant to be used.
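The measurement-noise caveat quoted from Scheffer et al. (2009) is easy to demonstrate numerically. The following is my own toy sketch (all values are assumptions, not from the manuscript): even when the true lag-1 autocorrelation is close to unity, additive observation noise biases the estimated autocorrelation strongly downward, so no fixed absolute threshold on the indicator can be relied upon.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
phi = 0.98                      # true AR(1) coefficient near unity
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def ac1(y):
    # estimated lag-1 autocorrelation
    return np.corrcoef(y[:-1], y[1:])[0, 1]

# add measurement noise with 3x the signal's standard deviation
noisy = x + rng.normal(scale=3 * x.std(), size=n)
print(ac1(x), ac1(noisy))  # the noisy estimate is far below the true value
```

The attenuation follows directly from the variance ratio: the estimated autocorrelation is scaled by var(x) / (var(x) + var(noise)), which is why the signals "should still be interpreted in a relative sense".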
However, I agree that the suitability of CSD-based indicators to inform us about tipping should be investigated, and I believe that, after consideration of what the theory can and cannot deliver, and after adapting the study accordingly, this study could contribute valuable insights. Namely, I believe the study may reflect some misunderstandings of the CSD-based theory, which I will try to outline below.
If the system had previously been in the state associated with the now-vanished stable equilibrium, it would of course be attracted to the other, now only remaining, stable equilibrium, resembling a qualitative shift in the system state. However, this is not the only way such a qualitative shift can happen. Depending on the distance of the unstable equilibrium from the stable equilibrium the system currently occupies, more or less external energy (perturbation) is needed to "kick the system over" to the other stable equilibrium.
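To make the basin-distance argument concrete, consider the standard double-well toy model dx/dt = -x^3 + x + c (my own illustration, not a model from the manuscript). As the control parameter c approaches the fold at c* = -2/(3*sqrt(3)), both the distance from the upper stable equilibrium to the unstable one (the perturbation needed to tip) and the local recovery rate shrink together:

```python
import numpy as np

def equilibria(c):
    """Equilibria of dx/dt = -x**3 + x + c, sorted ascending.
    For c above the fold at c* = -2/(3*sqrt(3)) ≈ -0.385 there are
    three: lower stable, unstable, upper stable."""
    return np.sort(np.roots([-1.0, 0.0, 1.0, c]).real)

for c in [0.0, -0.2, -0.3, -0.38]:
    lo, unstable, hi = equilibria(c)
    basin_width = hi - unstable        # how far a kick must push the system
    recovery_rate = abs(-3 * hi**2 + 1)  # |f'(x*)| at the upper equilibrium
    print(f"c={c:+.2f}  basin width={basin_width:.3f}  "
          f"recovery rate={recovery_rate:.3f}")
```

This is exactly why a slowing recovery rate indicates loss of stability without, by itself, predicting when a noise-driven escape to the other basin will occur.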
In summary, I believe that the conclusion "that despite their extensive utilization and ease of application, system-agnostic Early Warning Signals based on Critical Slowing Down are ineffective and non-robust predictors of forest mortality events on a global scale" cannot be "attributed to the poor approximation of ecosystem dynamics from optical vegetation indices and the multivariate character of real ecosystems"; rather, the theory was applied beyond the conceptual scope it can reliably represent. This is, of course, an important point to make.
Specific comments
This makes me wonder whether I understood your approach correctly. How do you obtain a trend from only one year of data if your minimum window size for calculating the EWS was one year?