GenderBench: Evaluation Suite for Gender Biases in LLMs

Abstract

We present GenderBench -- a comprehensive evaluation suite designed to measure gender biases in LLMs. GenderBench includes 14 probes that quantify 19 gender-related harmful behaviors exhibited by LLMs. We release GenderBench as an open-source and extensible library to improve the reproducibility and robustness of benchmarking across the field. We also publish our evaluation of 12 LLMs. Our measurements reveal consistent patterns in their behavior. We show that LLMs struggle with stereotypical reasoning, equitable gender representation in generated texts, and occasionally also with discriminatory behavior in high-stakes scenarios, such as hiring.

@article{pikuliak2025_2505.12054,
  title={GenderBench: Evaluation Suite for Gender Biases in LLMs},
  author={Matúš Pikuliak},
  journal={arXiv preprint arXiv:2505.12054},
  year={2025}
}