Large Language Models are Near-Optimal Decision-Makers with a Non-Human Learning Behavior

Human decision-making lies at the foundation of our society and civilization, yet we are on the verge of a future in which much of it will be delegated to artificial intelligence. The arrival of Large Language Models (LLMs) has transformed the nature and scope of AI-supported decision-making; however, how they learn to make decisions, compared with humans, remains poorly understood. In this study, we examined the decision-making behavior of five leading LLMs across three core dimensions of real-world decision-making: uncertainty, risk, and set-shifting. Using three well-established experimental psychology tasks designed to probe these dimensions, we benchmarked the LLMs against 360 newly recruited human participants. Across all tasks, the LLMs often outperformed humans, approaching near-optimal performance. At the same time, the processes underlying their decisions diverged fundamentally from those of humans. On the one hand, our findings demonstrate the ability of LLMs to manage uncertainty, calibrate risk, and adapt to change. On the other hand, this disparity highlights the risks of relying on them as substitutes for human judgment and calls for further inquiry.
@article{li2025_2506.16163,
  title   = {Large Language Models are Near-Optimal Decision-Makers with a Non-Human Learning Behavior},
  author  = {Hao Li and Gengrui Zhang and Petter Holme and Shuyue Hu and Zhen Wang},
  journal = {arXiv preprint arXiv:2506.16163},
  year    = {2025}
}