BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models

30 April 2025
Zhiting Fan, Ruizhe Chen, Zuozhu Liu
Abstract

Identifying bias in LLM-generated content is a crucial prerequisite for ensuring fairness in LLMs. Existing methods, such as fairness classifiers and LLM-based judges, struggle to understand underlying intent and lack explicit criteria for fairness judgments. In this paper, we introduce BiasGuard, a novel bias detection tool that explicitly analyzes inputs and reasons through fairness specifications to provide accurate judgments. BiasGuard is implemented as a two-stage approach: the first stage initializes the model to reason explicitly over fairness specifications, while the second stage leverages reinforcement learning to enhance its reasoning and judgment capabilities. Our experiments, conducted across five datasets, demonstrate that BiasGuard outperforms existing tools, improving accuracy and reducing over-fairness misjudgments. We also highlight the importance of reasoning-enhanced decision-making and provide evidence for the effectiveness of our two-stage optimization pipeline.
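
To make the reasoning-enhanced judgment concrete, below is a minimal, hypothetical sketch of what inference with such a detector could look like: the model is prompted with a fairness specification, reasons step by step, and emits a final verdict that is parsed programmatically. The prompt template, the specification text, and the function names are illustrative assumptions, not the authors' released implementation, and the stage-two RL training loop is omitted entirely.

```python
# Hypothetical sketch of reasoning-enhanced bias detection at inference time.
# The prompt format and fairness specification below are illustrative
# assumptions; they are not taken from the BiasGuard paper or release.

FAIRNESS_SPEC = (
    "A statement is biased if it attributes traits, abilities, or worth to a "
    "person based on group membership (e.g., gender, race, age) rather than "
    "individual evidence."
)

PROMPT_TEMPLATE = """You are a bias detector.
Fairness specification:
{spec}

Input:
{text}

First reason step by step about whether the input violates the specification,
then answer with exactly one line: VERDICT: BIASED or VERDICT: UNBIASED."""


def build_prompt(text: str) -> str:
    """Assemble the specification-grounded reasoning prompt for one input."""
    return PROMPT_TEMPLATE.format(spec=FAIRNESS_SPEC, text=text)


def parse_verdict(completion: str) -> bool:
    """Extract the final judgment from the model's reasoning trace.

    Returns True for a BIASED verdict, False for UNBIASED.
    """
    for line in reversed(completion.strip().splitlines()):
        if line.startswith("VERDICT:"):
            # Note: "UNBIASED" contains "BIASED" as a substring, so check it first.
            return "BIASED" in line and "UNBIASED" not in line
    raise ValueError("no verdict line found in completion")


if __name__ == "__main__":
    # A stand-in completion; in practice this would come from the tuned LLM.
    fake_completion = (
        "The statement generalizes ability from gender, which the "
        "specification flags as bias.\nVERDICT: BIASED"
    )
    print(build_prompt("Women are bad at math."))
    print("biased?", parse_verdict(fake_completion))
```

Under this framing, the paper's second stage would plausibly reward completions whose final verdict is correct, encouraging reasoning traces that track the specification rather than surface keywords; the exact reward design is not specified in the abstract.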

@article{fan2025_2504.21299,
  title={BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models},
  author={Zhiting Fan and Ruizhe Chen and Zuozhu Liu},
  journal={arXiv preprint arXiv:2504.21299},
  year={2025}
}