Large Language Models (LLMs) often exhibit gender bias, resulting in unequal treatment of male and female subjects across different contexts. To address this issue, we propose a novel data generation framework that fosters exploratory thinking in LLMs. Our approach prompts models to generate story pairs featuring male and female protagonists in structurally identical, morally ambiguous scenarios, then elicits and compares their moral judgments. When inconsistencies arise, the model is guided to produce balanced, gender-neutral judgments. These story-judgment pairs are then used to fine-tune the models or to optimize them via Direct Preference Optimization (DPO). Experimental results show that our method significantly reduces gender bias while preserving or even enhancing general model capabilities. We will release the code and generated data.
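As a rough illustration of the pipeline the abstract describes, the sketch below walks through one iteration of story-pair generation, judgment comparison, and preference-pair construction. It is a minimal Python sketch under assumed details: query_llm, the prompt wording, and the consistency check are hypothetical placeholders, not the paper's released code.

# Hypothetical sketch of the story-pair generation and judgment-comparison loop.
# query_llm stands in for any chat-completion call to the model being audited;
# the prompts are illustrative and not the paper's exact wording.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to the language model under study."""
    raise NotImplementedError

def generate_story_pair(scenario_seed: str) -> tuple[str, str]:
    """Produce two structurally identical, morally ambiguous stories that
    differ only in the protagonist's gender."""
    male_story = query_llm(
        f"Write a short, morally ambiguous story about a man. Scenario: {scenario_seed}"
    )
    female_story = query_llm(
        "Rewrite the following story so it is identical except that the protagonist "
        f"is a woman:\n{male_story}"
    )
    return male_story, female_story

def elicit_judgment(story: str) -> str:
    """Ask the model for a moral judgment of the protagonist's action."""
    return query_llm(f"Is the protagonist's action morally acceptable? Explain.\n{story}")

def build_preference_example(scenario_seed: str) -> dict | None:
    """Return a (prompt, chosen, rejected) triple when the model judges the two
    gendered stories inconsistently; otherwise return None."""
    male_story, female_story = generate_story_pair(scenario_seed)
    male_judgment = elicit_judgment(male_story)
    female_judgment = elicit_judgment(female_story)

    # Compare the two judgments; here the model itself acts as the checker.
    consistent = query_llm(
        "Do these two judgments reach the same conclusion? Answer yes or no.\n"
        f"A: {male_judgment}\nB: {female_judgment}"
    ).strip().lower().startswith("yes")
    if consistent:
        return None  # no bias signal for this scenario

    # Guide the model toward a balanced, gender-neutral judgment to serve as the
    # preferred ("chosen") response; the inconsistent judgment becomes "rejected".
    balanced = query_llm(
        "The two stories differ only in the protagonist's gender, yet your judgments "
        "differ. Provide a single judgment that applies equally to both:\n"
        f"{male_story}\n{female_story}"
    )
    return {
        "prompt": f"Is the protagonist's action morally acceptable?\n{male_story}",
        "chosen": balanced,
        "rejected": male_judgment,
    }

Each returned triple could then be passed to a standard DPO trainer (for example, the DPOTrainer in Hugging Face TRL, which consumes prompt/chosen/rejected fields); pairing the biased judgment with the balanced one is what supplies the preference signal.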
@article{wei2025_2505.17217,
  title={Mitigating Gender Bias via Fostering Exploratory Thinking in LLMs},
  author={Kangda Wei and Hasnat Md Abdullah and Ruihong Huang},
  journal={arXiv preprint arXiv:2505.17217},
  year={2025}
}