ResearchTrend.AI

She Elicits Requirements and He Tests: Software Engineering Gender Bias in Large Language Models

17 March 2023
Christoph Treude
Hideaki Hata

Papers citing "She Elicits Requirements and He Tests: Software Engineering Gender Bias in Large Language Models" (3 papers):

  1. A Catalog of Fairness-Aware Practices in Machine Learning Engineering
     Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba
     29 Aug 2024 [FaML]

  2. Mitigating Political Bias in Language Models Through Reinforced Calibration
     Ruibo Liu, Chenyan Jia, Jason W. Wei, Guangxuan Xu, Lili Wang, Soroush Vosoughi
     30 Apr 2021

  3. Identifying and Reducing Gender Bias in Word-Level Language Models
     Shikha Bordia, Samuel R. Bowman
     05 Apr 2019 [FaML]