This work is distributed under the Creative Commons Attribution 4.0 License.
Exploring the opportunities and challenges of using large language models to represent institutional agency in land system modelling
Abstract. Public policy institutions play crucial roles in the land system, but modelling their policy-making processes is challenging. Large Language Models (LLMs) offer a novel approach to simulating many different types of human decision-making, including policy choices. This paper investigates the opportunities and challenges that LLMs bring to land system modelling by integrating LLM-powered institutional agents within an agent-based land use model. Four types of LLM agents are examined, all of which, in the examples presented here, use taxes to steer meat production toward a target level. The LLM agents output both their reasoning and their proposed policy actions. The agents' performance is benchmarked against two baseline scenarios: one without policy interventions and another implementing optimal policy actions determined through a genetic algorithm. The findings show that while LLM agents perform better than the non-intervention scenario, they fall short of the performance achieved by optimal policy actions. However, the LLM agents demonstrate decision-making marked by policy consistency and transparent reasoning, including strategies such as incrementalism, delayed policy action, proactive policy adjustments, and the balancing of multiple stakeholder interests. Agents equipped with experiential learning capabilities excel at achieving policy objectives through progressive policy actions. The order in which reasoning and proposed policy actions are output has a notable effect on the agents' performance, suggesting that enforced reasoning guides as well as explains LLM decisions. The approach presented here points to promising opportunities and significant challenges. Opportunities include exploring naturalistic institutional decision-making, handling large volumes of institutional documents, and supporting human-AI cooperation. Challenges mainly lie in the scalability, interpretability, and reliability of LLMs.
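To make the coupling described in the abstract concrete, the sketch below shows one possible form of the reasoning-then-action loop in Python. It is an illustrative assumption rather than the authors' implementation: the function names (`call_llm`, `run_land_use_year`, `institutional_agent_step`), the prompt wording, and the JSON response format are all hypothetical.

```python
# Minimal sketch (not the authors' code) of an LLM institutional agent coupled to a
# land use model: each year the agent is asked for its reasoning first, then a tax action.
import json

TARGET_MEAT_PRODUCTION = 100.0  # illustrative target level


def institutional_agent_step(call_llm, history, current_production, current_tax):
    """Ask the LLM for reasoning, then a new tax rate, and record the decision."""
    prompt = (
        "You are a public policy institution using a tax to steer meat production.\n"
        f"Target production: {TARGET_MEAT_PRODUCTION}\n"
        f"Current production: {current_production}\n"
        f"Current tax rate: {current_tax}\n"
        f"Past decisions: {json.dumps(history)}\n"
        # Asking for reasoning *before* the action mirrors the paper's finding that
        # output order matters: enforced reasoning guides as well as explains decisions.
        'Reply in JSON as {"reasoning": "...", "new_tax_rate": <number>}.'
    )
    reply = call_llm(prompt)      # hypothetical wrapper around whichever LLM API is used
    decision = json.loads(reply)  # in practice needs validation/retries (human in the loop)
    history.append({
        "production": current_production,
        "tax": decision["new_tax_rate"],
        "reasoning": decision["reasoning"],
    })
    return float(decision["new_tax_rate"])


def run_policy_scenario(call_llm, run_land_use_year, years=20, initial_tax=0.0):
    """Run the land use model with one institutional policy decision per simulated year."""
    history, tax = [], initial_tax
    production = run_land_use_year(tax)  # land use ABM responds to the current tax
    for _ in range(years):
        tax = institutional_agent_step(call_llm, history, production, tax)
        production = run_land_use_year(tax)
    return history
```

An experiential-learning variant would additionally feed summaries of past outcomes back into the prompt, while the genetic-algorithm baseline described in the abstract would instead search directly over the sequence of tax rates.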
Status: open (until 15 Nov 2024)
- RC1: 'Comment on egusphere-2024-449', Anonymous Referee #1, 11 Apr 2024
In this manuscript, the authors describe their work inducing a large language model to “role-play” as various kinds of policy decision-makers in an agent-based land use model. While a human operator needs to stay in the loop to keep the LLM on task and producing output in the correct format, the agents, when properly prompted, are capable of producing policy actions that achieve their goal. As befits such a novel method, the authors do more than just use the policy actions output by the model; they also dig into the apparent “reasoning” behind its actions.
This is a fascinating piece of research. The paper is logically composed and well written, and the figures are clear. However, I do have a number of comments, the most important of which relate to the manuscript’s eliding of how LLMs actually work. Once these are addressed, it will stand as an important, foundational contribution to the use of LLMs in agent-based land use modelling.
Please see the attached PDF for my comments.
- CC1: 'Generalisability & scalability', Oliver Perkins, 13 Apr 2024
Dear authors,
I greatly enjoyed reading this impressive work. Please find enclosed some questions and comments, which relate primarily to the scalability and generalisability of what you have achieved here.
All best
Ol Perkins
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
395 | 140 | 29 | 564 | 42 | 20