Preprints
https://doi.org/10.5194/egusphere-2023-2969
30 Jan 2024

Moving beyond post-hoc XAI: Lessons learned from dynamical climate modeling

Ryan O'Loughlin, Dan Li, and Travis O'Brien

Abstract. AI models are criticized as being black boxes, potentially subjecting climate science to greater uncertainty. Explainable artificial intelligence (XAI) has been proposed to probe AI models and increase trust. In this Perspective, we suggest that, in addition to using XAI methods, AI researchers in climate science can learn from past successes in the development of physics-based dynamical climate models. Dynamical models are complex but have gained trust because their successes and failures can be attributed to specific components or sub-models, such as when model bias is explained by pointing to a particular parameterization. We propose three types of understanding as a basis to evaluate trust in dynamical and AI models alike: (1) instrumental understanding, which is obtained when a model has passed a functional test; (2) statistical understanding, which is obtained when researchers can make sense of the modeling results using statistical techniques to identify input-output relationships; and (3) component-level understanding, which refers to modelers' ability to point to specific model components or parts of the model architecture as the culprit for erratic model behaviors or as the crucial reason why the model functions well. We demonstrate how component-level understanding has been sought and achieved via climate model intercomparison projects over the past several decades. Such component-level understanding routinely leads to model improvements and may also serve as a template for thinking about AI-driven climate science. Currently, XAI methods can help explain the behaviors of AI models by focusing on the mapping between input and output, thereby increasing the statistical understanding of AI models. Yet, to further increase our understanding of AI models, we will have to build AI models that have interpretable components amenable to component-level understanding.
We give recent examples from the AI climate science literature to highlight some recent, albeit limited, successes in achieving component-level understanding and thereby explaining model behavior. The merit of such interpretable AI models is that they serve as a stronger basis for trust in climate modeling and, by extension, downstream uses of climate model data.
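The abstract's distinction between statistical and component-level understanding can be illustrated with a minimal sketch of a post-hoc XAI method. The example below implements permutation feature importance, which probes only the input-output mapping of a black-box model (i.e., statistical understanding); the synthetic data, model, and function names are hypothetical illustrations, not drawn from the paper.

```python
# Hypothetical sketch of a post-hoc XAI method: permutation feature
# importance. All data and names here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def model(X):
    # Stand-in for any trained black-box predictor; here the true map.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(model, X, y, n_repeats=10):
    """MSE increase when each input feature is shuffled independently."""
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        mses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            mses.append(np.mean((model(Xp) - y) ** 2))
        importances[j] = np.mean(mses) - base_mse
    return importances

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; feature 2 scores ~0
```

Note that this kind of score only ranks inputs by their effect on outputs; it says nothing about which internal component of the model produces that sensitivity, which is the gap component-level understanding is meant to close.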

Competing interests: At least one of the (co-)authors is a member of the editorial board of Geoscientific Model Development. The peer-review process was guided by an independent editor, and the authors have no other competing interests to declare.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.

Journal article(s) based on this preprint

11 Feb 2025
| Review and perspective paper
| Highlight paper
Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling
Ryan J. O'Loughlin, Dan Li, Richard Neale, and Travis A. O'Brien
Geosci. Model Dev., 18, 787–802, https://doi.org/10.5194/gmd-18-787-2025, 2025

Interactive discussion

Status: closed

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • CC1: 'Cross-validation, Symbolic Regression, Pareto include', Paul PUKITE, 16 Feb 2024
  • RC1: 'Comment on egusphere-2023-2969', Julie Jebeile, 06 Jun 2024
  • RC2: 'Comment on egusphere-2023-2969', Imme Ebert-Uphoff, 12 Jun 2024
  • RC3: 'Comment on egusphere-2023-2969', Yumin Liu, 29 Jun 2024
  • AC1: 'Comment on egusphere-2023-2969', Ryan O'Loughlin, 07 Sep 2024


Peer review completion

AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Ryan O'Loughlin on behalf of the Authors (07 Sep 2024)  Author's response   Author's tracked changes 
EF by Anna Glados (16 Sep 2024)  Manuscript 
ED: Publish subject to minor revisions (review by editor) (31 Oct 2024) by Richard Mills
AR by Ryan O'Loughlin on behalf of the Authors (07 Nov 2024)  Author's response   Author's tracked changes   Manuscript 
ED: Publish as is (22 Nov 2024) by Richard Mills
ED: Publish as is (06 Dec 2024) by David Ham (Executive editor)
AR by Ryan O'Loughlin on behalf of the Authors (09 Dec 2024)

Viewed

Total article views: 891 (including HTML, PDF, and XML)
  • HTML: 660
  • PDF: 177
  • XML: 54
  • Total: 891
  • Supplement: 34
  • BibTeX: 26
  • EndNote: 24
Cumulative views and downloads (calculated since 30 Jan 2024)

Viewed (geographical distribution)

Total article views: 889 (including HTML, PDF, and XML) Thereof 889 with geography defined and 0 with unknown origin.
Latest update: 11 Mar 2026

The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.

Short summary
We draw from traditional climate modeling practices to make recommendations for AI-driven climate science. In particular, we show how component-level understanding, which is obtained when scientists can link model behavior to parts within the overall model, should guide the development and evaluation of AI models. Better understanding can lead to a stronger basis for trust in these models. We highlight several examples to demonstrate this.