https://doi.org/10.5194/egusphere-2023-2969
30 Jan 2024

Moving beyond post-hoc XAI: Lessons learned from dynamical climate modeling

Ryan O'Loughlin, Dan Li, and Travis O'Brien

Abstract. AI models are criticized as being black boxes, potentially subjecting climate science to greater uncertainty. Explainable artificial intelligence (XAI) has been proposed to probe AI models and increase trust. In this Perspective, we suggest that, in addition to using XAI methods, AI researchers in climate science can learn from past successes in the development of physics-based dynamical climate models. Dynamical models are complex but have gained trust because their successes and failures can be attributed to specific components or sub-models, such as when model bias is explained by pointing to a particular parameterization. We propose three types of understanding as a basis to evaluate trust in dynamical and AI models alike: (1) instrumental understanding, which is obtained when a model has passed a functional test; (2) statistical understanding, which is obtained when researchers can make sense of the modeling results using statistical techniques to identify input-output relationships; and (3) component-level understanding, which refers to modelers’ ability to point to specific components or parts of the model architecture as the culprit for erratic model behavior or as the crucial reason why the model functions well. We demonstrate how component-level understanding has been sought and achieved via climate model intercomparison projects over the past several decades. Such component-level understanding routinely leads to model improvements and may also serve as a template for thinking about AI-driven climate science. Currently, XAI methods can help explain the behavior of AI models by focusing on the mapping between input and output, thereby increasing the statistical understanding of AI models. Yet, to further increase our understanding of AI models, we will have to build AI models that have interpretable components amenable to component-level understanding. We give examples from the AI climate science literature to highlight some recent, albeit limited, successes in achieving component-level understanding and thereby explaining model behavior. The merit of such interpretable AI models is that they serve as a stronger basis for trust in climate modeling and, by extension, in downstream uses of climate model data.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this preprint. The responsibility to include appropriate place names lies with the authors.

Status: closed

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • CC1: 'Cross-validation, Symbolic Regression, Pareto include', Paul PUKITE, 16 Feb 2024
  • RC1: 'Comment on egusphere-2023-2969', Julie Jebeile, 06 Jun 2024
  • RC2: 'Comment on egusphere-2023-2969', Imme Ebert-Uphoff, 12 Jun 2024
  • RC3: 'Comment on egusphere-2023-2969', Yumin Liu, 29 Jun 2024
  • AC1: 'Comment on egusphere-2023-2969', Ryan O'Loughlin, 07 Sep 2024


Viewed

Total article views: 831 (including HTML, PDF, and XML)
  • HTML: 614
  • PDF: 164
  • XML: 53
  • Total: 831
  • Supplement: 33
  • BibTeX: 24
  • EndNote: 22
Views and downloads (calculated since 30 Jan 2024)

Viewed (geographical distribution)

Total article views: 830 (including HTML, PDF, and XML), of which 830 with geography defined and 0 with unknown origin.
Latest update: 24 Dec 2024
This perspective paper examines in detail the concept of explicability in climate models, whether conventional physics-based dynamical models or those incorporating components based on machine learning. Everyone with an interest in climate models or their outputs would benefit from understanding the processes by which we can understand the importance and accuracy of these models and the methods by which it is possible to make sense of their outputs. This paper is a major contribution to that understanding. It is also very well written and should be widely read in the field.
Short summary
We draw from traditional climate modeling practices to make recommendations for AI-driven climate science. In particular, we show how component-level understanding, which is obtained when scientists can link model behavior to parts within the overall model, should guide the development and evaluation of AI models. Better understanding can lead to a stronger basis for trust in these models. We highlight several examples to demonstrate this.