https://doi.org/10.5194/egusphere-2024-449
12 Mar 2024
Status: this preprint is open for discussion.

Exploring the opportunities and challenges of using large language models to represent institutional agency in land system modelling

Yongchao Zeng, Calum Brown, Joanna Raymond, Mohamed Byari, Ronja Hotz, and Mark Rounsevell

Abstract. Public policy institutions play crucial roles in the land system, but modelling their policy-making processes is challenging. Large Language Models (LLMs) offer a novel approach to simulating many different types of human decision-making, including policy choices. This paper investigates the opportunities and challenges that LLMs bring to land system modelling by integrating LLM-powered institutional agents within an agent-based land use model. Four types of LLM agents are examined, all of which, in the examples presented here, use taxes to steer meat production toward a target level. The LLM agents output both their reasoning and their policy actions. The agents’ performance is benchmarked against two baseline scenarios: one without policy interventions and another implementing optimal policy actions determined through a genetic algorithm. The findings show that while the LLM agents perform better than the non-intervention scenario, they fall short of the performance achieved by the optimal policy actions. However, the LLM agents demonstrate behaviour and decision-making marked by policy consistency and transparent reasoning, including strategies such as incrementalism, delayed policy action, proactive policy adjustment, and the balancing of multiple stakeholder interests. Agents equipped with experiential learning capabilities excel in achieving policy objectives through progressive policy actions. The order in which reasoning and proposed policy actions are output has a notable effect on the agents’ performance, suggesting that enforced reasoning guides as well as explains LLM decisions. The approach presented here points to promising opportunities and significant challenges. The opportunities include exploring naturalistic institutional decision-making, handling large volumes of institutional documents, and human-AI cooperation. The challenges lie mainly in the scalability, interpretability, and reliability of LLMs.
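To make the setup described in the abstract more concrete, the sketch below shows, in Python, how an LLM-powered institutional agent might sit inside a simulation loop: it observes meat production, is prompted to give its reasoning before its tax decision, and the resulting tax rate feeds back into the next model step. This is an illustrative assumption, not the authors' implementation: `call_llm`, `land_use_step`, the JSON response format, the policy target, and all numeric values are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of an LLM institutional agent
# adjusting a meat-production tax inside an agent-based land use loop.
import json

TARGET_MEAT_PRODUCTION = 100.0  # hypothetical policy target (arbitrary units)


def call_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion LLM API."""
    raise NotImplementedError("Plug in an LLM client here.")


def land_use_step(tax_rate: float, production: float) -> float:
    """Toy stand-in for one step of the land use model:
    higher taxes dampen meat production."""
    return production * (1.0 - 0.5 * tax_rate)


def institutional_agent(production: float, tax_rate: float) -> tuple[str, float]:
    """Ask the LLM to reason first, then propose a new tax rate.
    Putting reasoning before the action mirrors the finding that
    enforced reasoning guides as well as explains the decision."""
    prompt = (
        "You are a public policy institution steering meat production "
        f"toward a target of {TARGET_MEAT_PRODUCTION}. Current production: "
        f"{production:.1f}; current tax rate: {tax_rate:.2f}.\n"
        'Respond in JSON as {"reasoning": "...", "tax_rate": <0.0-1.0>}, '
        "giving your reasoning before the tax rate."
    )
    reply = json.loads(call_llm(prompt))
    return reply["reasoning"], float(reply["tax_rate"])


def simulate(steps: int = 10) -> None:
    production, tax_rate = 150.0, 0.0  # hypothetical initial state
    for t in range(steps):
        reasoning, tax_rate = institutional_agent(production, tax_rate)
        production = land_use_step(tax_rate, production)
        print(f"step {t}: tax={tax_rate:.2f}, production={production:.1f}")
        print(f"  reasoning: {reasoning}")
```

A benchmark of the kind described in the abstract would replace `institutional_agent` with either a no-intervention rule (tax fixed at zero) or a sequence of tax rates optimised by a genetic algorithm, and compare how closely production tracks the target.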

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this preprint. The responsibility to include appropriate place names lies with the authors.

Status: open (until 27 Nov 2024)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on egusphere-2024-449', Anonymous Referee #1, 11 Apr 2024
  • CC1: 'Generalisability & scalability', Oliver Perkins, 13 Apr 2024
  • RC2: 'Comment on egusphere-2024-449', Oliver Perkins, 12 Nov 2024

Viewed

Total article views: 629 (including HTML, PDF, and XML)
  • HTML: 429
  • PDF: 167
  • XML: 33
  • Total: 629
  • BibTeX: 45
  • EndNote: 22
Cumulative views and downloads (calculated since 12 Mar 2024)

Viewed (geographical distribution)

Total article views: 650 (including HTML, PDF, and XML), of which 650 have a defined geographical origin and 0 are of unknown origin.
Latest update: 21 Nov 2024
Short summary
This study explores the use of Large Language Models (LLMs) to simulate policy-making in land systems. We integrated LLMs into a land use model and simulated LLM-powered institutional agents steering meat production through taxation. The results show that LLMs can generate boundedly rational policy-making behaviours that are difficult to model with conventional methods, and that LLMs can provide the reasoning behind policy actions. We also discuss the potential and challenges of LLMs in large-scale simulations.