Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment

31 July 2024
Sangwon Yu, Jongyoon Song, Bongkyu Hwang, Hoyoung Kang, Sooah Cho, Junhwa Choi, Seongho Joe, Taehee Lee, Youngjune Gwon, Sungroh Yoon
Abstract

Binary decision tasks, such as yes-no questions or answer verification, reflect a significant real-world scenario: users seeking confirmation of the correctness of their decisions on specific issues. In this work, we observe that language models exhibit a negative bias in binary decisions on complex reasoning tasks. Based on these observations and a rationale grounded in attention-based model dynamics, we propose the negative attention score (NAS) to systematically and quantitatively formulate negative bias. Using NAS, we identify attention heads that attend to negative tokens provided in the instructions as answer candidates for binary decisions, regardless of the question in the prompt, and validate their association with the negative bias. Additionally, we propose the negative attention score alignment (NASA) method, a parameter-efficient fine-tuning technique that addresses the extracted negatively biased attention heads. Experimental results across various domains of reasoning tasks and a large model search space demonstrate that NASA significantly reduces the gap between precision and recall caused by negative bias while preserving the models' generalization abilities.
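
The abstract does not give the exact NAS formula, but the core idea (attention heads that disproportionately attend to the negative answer candidate in the instruction, independent of the question) can be illustrated with a minimal probing sketch. The snippet below is a hypothetical illustration, not the authors' implementation: the model name ("gpt2"), the prompt template, the choice of " No" as the negative candidate token, and the scoring of the last position's attention are all assumptions made for the example.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM that can return attentions
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Illustrative yes/no prompts with both answer candidates in the instruction.
prompts = [
    "Answer with Yes or No. Question: Is 17 a prime number? Answer:",
    "Answer with Yes or No. Question: Is 15 a prime number? Answer:",
]

num_layers = model.config.num_hidden_layers
num_heads = model.config.num_attention_heads
scores = torch.zeros(num_layers, num_heads)

with torch.no_grad():
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        input_ids = inputs["input_ids"][0]
        # Locate the negative candidate " No" inside the instruction
        # (assumption: it is a single token in this tokenizer's vocabulary).
        no_id = tokenizer.encode(" No", add_special_tokens=False)[0]
        positions = (input_ids == no_id).nonzero()
        if positions.numel() == 0:
            continue
        no_pos = positions[0].item()
        outputs = model(**inputs, output_attentions=True)
        # outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
        for layer, attn in enumerate(outputs.attentions):
            # Attention from the final position (where the answer is
            # generated) back to the "No" candidate, per head.
            scores[layer] += attn[0, :, -1, no_pos]

scores /= len(prompts)

# Heads with unusually high mean attention to "No" regardless of the
# question are candidates for the negatively biased heads the paper targets.
top = torch.topk(scores.flatten(), k=5)
for value, idx in zip(top.values, top.indices):
    layer, head = divmod(idx.item(), num_heads)
    print(f"layer {layer}, head {head}: mean attention to 'No' = {value.item():.4f}")

Under this reading, NASA's parameter-efficient fine-tuning would then update only the heads flagged by such a score rather than the full model, which is consistent with, but not confirmed by, the abstract's description.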

View on arXiv
@article{yu2025_2408.00137,
  title={Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment},
  author={Sangwon Yu and Jongyoon Song and Bongkyu Hwang and Hoyoung Kang and Sooah Cho and Junhwa Choi and Seongho Joe and Taehee Lee and Youngjune L. Gwon and Sungroh Yoon},
  journal={arXiv preprint arXiv:2408.00137},
  year={2025}
}