Cognitive Debiasing Large Language Models for Decision-Making

Main: 7 pages · Appendix: 4 pages · Bibliography: 4 pages · 7 figures · 5 tables
Abstract
Large language models (LLMs) have shown potential in supporting decision-making applications, particularly as personal conversational assistants in the financial, healthcare, and legal domains. While prompt engineering strategies have enhanced the decision-making capabilities of LLMs, cognitive biases inherent to LLMs present significant challenges. Cognitive biases are systematic patterns of deviation from norms or rationality in decision-making that can lead to inaccurate outputs. Existing cognitive bias mitigation strategies assume that input prompts contain exactly one type of cognitive bias, and therefore fail to perform well in realistic settings where any number of biases may be present.
@article{lyu2025_2504.04141,
  title={Cognitive Debiasing Large Language Models for Decision-Making},
  author={Yougang Lyu and Shijie Ren and Yue Feng and Zihan Wang and Zhumin Chen and Zhaochun Ren and Maarten de Rijke},
  journal={arXiv preprint arXiv:2504.04141},
  year={2025}
}