Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models

17 February 2025
Yue Xu
Chengyan Fu
Li Xiong
Sibei Yang
Wenjie Wang
Abstract

Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack adaptability to evolving societal norms. Instruction-based approaches offer flexibility but often compromise task performance. To address these limitations, we propose FaIRMaker, an automated and model-independent framework that employs an auto-search and refinement paradigm to adaptively generate Fairwords, which act as instructions integrated into input queries to reduce gender bias and enhance response quality. Extensive experiments demonstrate that FaIRMaker automatically searches for and dynamically refines Fairwords, effectively mitigating gender bias while preserving task integrity and ensuring compatibility with both API-based and open-source LLMs.
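The abstract describes Fairwords as instructions integrated into the input query, which makes the framework model-independent. The minimal sketch below illustrates only that integration step; the FAIRWORD string and the query_llm helper are hypothetical placeholders, since the actual Fairwords are produced by the paper's auto-search and refinement procedure rather than written by hand.

```python
# Minimal sketch of the Fairwords integration step described in the abstract.
# The Fairword below is a hand-written placeholder: in FaIRMaker, Fairwords
# are searched for and refined automatically, not authored manually.

FAIRWORD = (  # hypothetical example of a Fairword-style instruction
    "Answer the following query helpfully while avoiding gender stereotypes "
    "and treating all genders equitably."
)

def apply_fairword(query: str, fairword: str = FAIRWORD) -> str:
    """Integrate the Fairword into the input query as a plain-text instruction."""
    return f"{fairword}\n\n{query}"

def query_llm(prompt: str) -> str:
    """Placeholder for any chat/completion call (API-based or open-source)."""
    raise NotImplementedError("plug in your preferred LLM client here")

if __name__ == "__main__":
    # Show the augmented query that would be sent to the model.
    print(apply_fairword("Describe a typical nurse's daily routine."))
```

Because the Fairword travels with the query rather than the model weights, the same mechanism works for closed-source APIs and local open-source models alike, which is the compatibility property the abstract emphasizes.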

@article{xu2025_2502.11559,
  title={Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models},
  author={Yue Xu and Chengyan Fu and Li Xiong and Sibei Yang and Wenjie Wang},
  journal={arXiv preprint arXiv:2502.11559},
  year={2025}
}