When Debate Fails: Bias Reinforcement in Large Language Models

21 March 2025
Jihwan Oh
Minchan Jeong
Jongwoo Ko
Se-Young Yun
    LLMAG
    AI4CE
Abstract

Large Language Models (LLMs) solve complex problems using training-free methods like prompt engineering and in-context learning, yet ensuring reasoning correctness remains challenging. While self-correction methods such as self-consistency and self-refinement aim to improve reliability, they often reinforce biases due to the lack of effective feedback mechanisms. Multi-Agent Debate (MAD) has emerged as an alternative, but we identify two key limitations: bias reinforcement, where debate amplifies model biases instead of correcting them, and lack of perspective diversity, as all agents share the same model and reasoning patterns, limiting true debate effectiveness. To systematically evaluate these issues, we introduce MetaNIM Arena, a benchmark designed to assess LLMs in adversarial strategic decision-making, where dynamic interactions influence optimal decisions. To overcome MAD's limitations, we propose DReaMAD (Diverse Reasoning via Multi-Agent Debate with Refined Prompt), a novel framework that (1) refines the LLM's strategic prior knowledge to improve reasoning quality and (2) promotes diverse viewpoints within a single model by systematically modifying prompts, reducing bias. Empirical results show that DReaMAD significantly improves decision accuracy, reasoning diversity, and bias mitigation across multiple strategic tasks, establishing it as a more effective approach for LLM-based decision-making.
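As context for the two failure modes the abstract names (agents sharing one model, and prompts that induce a shared bias), the sketch below illustrates the generic multi-agent-debate pattern with per-agent prompt diversification. It is a minimal conceptual illustration, not the authors' DReaMAD implementation: the ask_model stub, the PERSONAS prompts, the debate function, and the majority-vote aggregation are all assumptions introduced for demonstration.

from collections import Counter

# Hypothetical stand-in for an LLM call; DReaMAD's actual prompting and
# model interface are described in the paper, not reproduced here.
def ask_model(system_prompt: str, question: str, transcript: list[str]) -> str:
    # A real implementation would query an LLM with the system prompt,
    # the question, and the other agents' latest answers (transcript).
    return "placeholder answer"

# Illustrative "diverse viewpoint" prompts: the abstract states that DReaMAD
# systematically modifies prompts so agents backed by a single model still
# reason from different perspectives. These personas are invented examples.
PERSONAS = [
    "You are a cautious strategist who checks every move for counterplay.",
    "You are an aggressive player who looks for forcing lines first.",
    "You are a mathematician who reasons about invariants and parity.",
]

def debate(question: str, rounds: int = 2) -> str:
    """Generic multi-agent debate loop with a majority-vote decision rule."""
    answers = ["" for _ in PERSONAS]
    for _ in range(rounds):
        new_answers = []
        for i, persona in enumerate(PERSONAS):
            # Each agent sees the other agents' most recent answers.
            others = [a for j, a in enumerate(answers) if j != i and a]
            new_answers.append(ask_model(persona, question, others))
        answers = new_answers
    # Aggregate by majority vote (one of several possible decision rules).
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(debate("In Nim with heaps [3, 4, 5], what is the optimal move?"))

Without the persona step, all agents in this loop would reason from the same default prompt, which is exactly the perspective-diversity failure the abstract attributes to standard MAD.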

@article{oh2025_2503.16814,
  title={When Debate Fails: Bias Reinforcement in Large Language Models},
  author={Jihwan Oh and Minchan Jeong and Jongwoo Ko and Se-Young Yun},
  journal={arXiv preprint arXiv:2503.16814},
  year={2025}
}