
An Empirical Study of Group Conformity in Multi-Agent Systems

Main: 8 pages
Bibliography: 3 pages
Appendix: 6 pages
6 figures, 6 tables
Abstract

Recent advances in Large Language Models (LLMs) have enabled multi-agent systems that simulate real-world interactions with near-human reasoning. While previous studies have extensively examined biases related to protected attributes such as race, the emergence and propagation of biases on socially contentious issues in multi-agent LLM interactions remain underexplored. This study explores how LLM agents shape public opinion through debates on five contentious topics. By simulating over 2,500 debates, we analyze how initially neutral agents, assigned a centrist disposition, adopt specific stances over time. Statistical analyses reveal significant group conformity mirroring human behavior: LLM agents tend to align with numerically dominant groups or with more intelligent agents, the latter exerting greater influence. These findings underscore the crucial role of agent intelligence in shaping discourse and highlight the risks of bias amplification in online interactions. Our results emphasize the need for policy measures that promote diversity and transparency in LLM-generated discussions to mitigate the risks of bias propagation within anonymous online environments.

@article{choi2025_2506.01332,
  title={An Empirical Study of Group Conformity in Multi-Agent Systems},
  author={Min Choi and Keonwoo Kim and Sungwon Chae and Sangyeob Baek},
  journal={arXiv preprint arXiv:2506.01332},
  year={2025}
}