https://doi.org/10.5194/egusphere-2026-2237
27 Apr 2026
Status: this preprint is open for discussion (until 22 Jun 2026) and under review for Geoscientific Model Development (GMD).

Can We Trust LLMs for Complex Earth System Model Analysis? Silent Failure and Evidence from Module-Grounded Benchmarking

Tian Zhou, Yun Qian, and L. Ruby Leung

Abstract. Large language models (LLMs) are becoming increasingly capable of complex scientific scripting, but this growing capability creates a paradox: the more trustworthy their outputs appear, the more easily scientifically incorrect results can pass unnoticed. In Earth system model (ESM) analysis, such silent failures are more dangerous than visible crashes because they produce plausible figures and statistics that may be accepted without detailed inspection. We address this risk with ESFlow, a module-grounded agentic AI framework that constrains the LLM to compose workflows from validated analysis tools rather than generate arbitrary code. The LLM reads an auto-generated, self-describing catalog and outputs a workflow in YAML (a human-readable data-serialization format), which is then executed by a deterministic engine. We demonstrate this framework with a validated tool library for Energy Exascale Earth System Model (E3SM) land surface hydrology diagnostics in a benchmark spanning seven analysis tasks and six contemporary LLMs. Across both single-attempt runs and runs augmented with automatic self-debugging, the module-grounded approach attains an overall success rate above 80 %, maintains a low and stable silent-failure rate, and reaches 100 % success for the three high-capability models, whereas unconstrained Python code generation succeeds in only about 5 % of runs and sees its silent-failure rate rise from roughly 16 % to about 40 % under self-debugging. These results suggest that increasing LLM capability does not remove the reliability problem in scientific scripting; it makes silent failures more consequential by making incorrect outputs more convincing. The answer to the trust question posed in the title is therefore conditional: unconstrained code generation is not trustworthy for complex ESM analysis, whereas module-grounded workflow composition can be highly reliable for frontier models and remains substantially more robust under iterative self-debugging. By shifting the LLM's role from code generation to the composition of trusted tools, this framework provides a safer, more scalable architecture for AI-assisted scientific discovery that is aligned with FAIR (findable, accessible, interoperable, and reusable) principles.
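
To make the workflow-composition idea concrete, the minimal sketch below (an illustration, not the authors' ESFlow implementation) shows the pattern the abstract describes: a small catalog of validated tools, a YAML workflow such as an LLM might emit after reading that catalog, and a deterministic engine that executes only catalog entries and rejects anything else. All tool, step, variable, and path names are hypothetical.

```python
# Minimal illustration of module-grounded workflow execution (hypothetical
# tool and step names; not the ESFlow code). The LLM may only reference
# tools registered in the catalog; the engine refuses anything else.
import yaml  # PyYAML

# Stand-ins for a validated tool library; in practice these would be tested
# E3SM land surface hydrology diagnostics with documented inputs and outputs.
def load_runoff(case_dir):
    return {"case": case_dir, "var": "QRUNOFF"}   # placeholder

def annual_mean(data):
    return {**data, "stat": "annual_mean"}        # placeholder

def plot_map(data, outfile):
    print(f"writing {outfile} from {data}")       # placeholder

CATALOG = {"load_runoff": load_runoff,
           "annual_mean": annual_mean,
           "plot_map": plot_map}

# A workflow an LLM might emit after reading the self-describing catalog.
WORKFLOW_YAML = """
steps:
  - tool: load_runoff
    args: {case_dir: /path/to/e3sm/case}
    save_as: runoff
  - tool: annual_mean
    args: {data: $runoff}
    save_as: runoff_mean
  - tool: plot_map
    args: {data: $runoff_mean, outfile: runoff_mean.png}
"""

def run(workflow_yaml):
    """Deterministic engine: resolve $references and call only catalog tools."""
    results = {}
    for step in yaml.safe_load(workflow_yaml)["steps"]:
        name = step["tool"]
        if name not in CATALOG:  # arbitrary code is rejected, not executed
            raise ValueError(f"unknown tool: {name}")
        args = {key: results[val[1:]]
                if isinstance(val, str) and val.startswith("$") else val
                for key, val in step["args"].items()}
        output = CATALOG[name](**args)
        if "save_as" in step:
            results[step["save_as"]] = output
    return results

run(WORKFLOW_YAML)
```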

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.

Short summary
AI can now write scientific analysis code that looks correct but quietly produces wrong results. We tested whether asking the assistant to assemble analyses from a library of validated building blocks, rather than write code from scratch, makes it more reliable. Across six language models and seven Earth science tasks, free code writing succeeded only about five percent of the time, while the constrained approach exceeded eighty percent, suggesting that trust needs guardrails.