
An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations

Abstract

The rise of Large Language Models (LLMs) such as ChatGPT has advanced natural language processing, yet concerns about their cognitive biases are growing. In this paper, we investigate the anchoring effect, a cognitive bias in which the mind relies heavily on the first piece of information (the anchor) when forming subsequent judgments. We examine whether LLMs are susceptible to anchoring, the mechanisms underlying it, and potential mitigation strategies. To facilitate large-scale studies of the anchoring effect, we introduce a new dataset, SynAnchors. Combined with refined evaluation metrics, it allows us to benchmark widely used LLMs. Our findings show that anchoring bias is pervasive in LLMs, acts primarily in shallow layers, and is not eliminated by conventional strategies, although reasoning can offer some mitigation. This recontextualization through cognitive psychology suggests that LLM evaluation should focus not on standard benchmarks or over-optimized robustness tests, but on cognitive-bias-aware trustworthy evaluation.
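As a concrete illustration, the sketch below shows the classic two-anchor probe from cognitive psychology applied to an LLM; it is a minimal illustration, not the paper's actual protocol or metrics. The same estimation question is posed after a low or a high numeric anchor, and a systematic gap between the two answer distributions is the anchoring signature. The query_model callable is a hypothetical stand-in for any chat-completion API.

import re
import statistics
from typing import Callable

def extract_number(text: str) -> float | None:
    """Pull the first number out of a free-form model answer."""
    match = re.search(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(match.group()) if match else None

def anchoring_shift(
    query_model: Callable[[str], str],  # hypothetical: prompt -> model answer
    question: str,
    low_anchor: float,
    high_anchor: float,
    n_trials: int = 5,
) -> float:
    """Return mean(high-anchored answers) - mean(low-anchored answers).

    A value near zero suggests robustness to the anchor; a large positive
    shift means estimates drift toward whichever anchor was shown first.
    """
    def ask(anchor: float) -> list[float]:
        # Anchor first, question second: the standard anchoring paradigm.
        prompt = (
            f"Is the answer higher or lower than {anchor}? "
            f"Now give your best numeric estimate. {question}"
        )
        answers = (extract_number(query_model(prompt)) for _ in range(n_trials))
        return [a for a in answers if a is not None]

    return statistics.mean(ask(high_anchor)) - statistics.mean(ask(low_anchor))

Repeating the query n_trials times per anchor smooths over sampling noise; in practice one would aggregate this shift across many questions and anchor pairs, which is the kind of scaling a dataset like SynAnchors is meant to support.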

@article{huang2025_2505.15392,
  title={An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations},
  author={Yiming Huang and Biquan Bie and Zuqiu Na and Weilin Ruan and Songxin Lei and Yutao Yue and Xinlei He},
  journal={arXiv preprint arXiv:2505.15392},
  year={2025}
}