ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
GLEE: A Unified Framework and Benchmark for Language-based Economic Environments

7 October 2024
Eilam Shapira
Omer Madmon
Itamar Reinman
Samuel Joseph Amouyal
Roi Reichart
Moshe Tennenholtz
Main: 9 pages · Bibliography: 4 pages · Appendix: 19 pages · 23 figures · 12 tables
Abstract

Large Language Models (LLMs) show significant potential in economic and strategic interactions, where communication via natural language is often prevalent. This raises key questions: Do LLMs behave rationally? Can they mimic human behavior? Do they tend to reach efficient and fair outcomes? What role does natural language play in strategic interaction? How do the characteristics of the economic environment influence these dynamics? These questions become crucial given the economic and societal implications of integrating LLM-based agents into real-world data-driven systems, such as online retail platforms and recommender systems. While the ML community has been exploring the potential of LLMs in such multi-agent setups, varying assumptions, design choices, and evaluation criteria across studies make it difficult to draw robust and meaningful conclusions. To address this, we introduce a benchmark for standardizing research on two-player, sequential, language-based games. Inspired by the economic literature, we define three base families of games with consistent parameterization, degrees of freedom, and economic measures to evaluate agents' performance (self-gain) as well as the game outcome (efficiency and fairness). We develop an open-source framework for interaction simulation and analysis, and use it to collect a dataset of LLM vs. LLM interactions across numerous game configurations and an additional dataset of human vs. LLM interactions. Through extensive experimentation, we demonstrate how our framework and dataset can be used to: (i) compare the behavior of LLM-based agents to human players in various economic contexts; (ii) evaluate agents on both individual and collective performance measures; and (iii) quantify the effect of the economic characteristics of the environments on the behavior of agents.
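The abstract names three economic measures: each agent's self-gain, and the efficiency and fairness of the game outcome. The sketch below is purely illustrative and not the GLEE framework's actual API; the function name, the split-the-pie bargaining setting, and the specific normalizations (efficiency as realized surplus over the pie, fairness as one minus the normalized gap in gains) are all assumptions made for this example.

```python
def evaluate_outcome(pie: float, alice_share: float, agreed: bool) -> dict:
    """Compute self-gain, efficiency, and fairness for a two-player
    split-the-pie bargaining outcome.

    Hypothetical measures, not taken from the paper:
      - if no agreement is reached, both players receive 0;
      - efficiency = total realized surplus / available pie;
      - fairness = 1 - |gain difference| / pie.
    """
    if not agreed:
        gains = (0.0, 0.0)
    else:
        gains = (alice_share, pie - alice_share)

    efficiency = sum(gains) / pie
    fairness = 1.0 - abs(gains[0] - gains[1]) / pie
    return {"self_gain": gains, "efficiency": efficiency, "fairness": fairness}

# Example: a 60/40 split of a pie of 1.0 is fully efficient
# (efficiency = 1.0) but not perfectly fair (fairness = 0.8).
m = evaluate_outcome(pie=1.0, alice_share=0.6, agreed=True)
```

Measures of this shape make LLM-vs-LLM and human-vs-LLM play comparable on a common scale, which is the kind of standardized evaluation the benchmark aims at.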

@article{shapira2025_2410.05254,
  title={GLEE: A Unified Framework and Benchmark for Language-based Economic Environments},
  author={Eilam Shapira and Omer Madmon and Itamar Reinman and Samuel Joseph Amouyal and Roi Reichart and Moshe Tennenholtz},
  journal={arXiv preprint arXiv:2410.05254},
  year={2025}
}