WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models
arXiv:2306.15087 · 26 June 2023
Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May
Tags: OSLM

Papers citing "WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models" (22 of 22 papers shown)

FairTranslate: An English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity
Fanny Jourdan, Yannick Chevalier, Cécile Favre (22 Apr 2025)

An Empirically-grounded tool for Automatic Prompt Linting and Repair: A Case Study on Bias, Vulnerability, and Optimization in Developer Prompts
Dhia Elhaq Rzig, Dhruba Jyoti Paul, Kaiser Pister, Jordan Henkel, Foyzul Hassan (21 Jan 2025)

LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases
Dylan Bouchard, Mohit Singh Chauhan, David Skarbrevik, Viren Bajaj, Zeya Ahmad (06 Jan 2025)

Boardwalk Empire: How Generative AI is Revolutionizing Economic Paradigms
Subramanyam Sahoo, Kamlesh Dutta (19 Oct 2024)

Speciesism in Natural Language Processing Research
Masashi Takeshita, Rafal Rzepka (18 Oct 2024)

Data Defenses Against Large Language Models
William Agnew, Harry H. Jiang, Cella Sum, Maarten Sap, Sauvik Das (17 Oct 2024) [AAML]

Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Danaë Metaxa, Sorelle A. Friedler (09 Sep 2024)

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
Kunsheng Tang, Wenbo Zhou, Jie Zhang, Aishan Liu, Gelei Deng, Shuai Li, Peigui Qi, Weiming Zhang, Tianwei Zhang, Nenghai Yu (22 Aug 2024)

Evaluation of Large Language Models: STEM education and Gender Stereotypes
Smilla Due, Sneha Das, Marianne Andersen, Berta Plandolit López, Sniff Andersen Nexø, Line Clemmensen (14 Jun 2024)

An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics
Alva Markelius (10 Jun 2024)

Culturally Aware and Adapted NLP: A Taxonomy and a Survey of the State of the Art
Chen Cecilia Liu, Iryna Gurevych, Anna Korhonen (06 Jun 2024)

Navigating LLM Ethics: Advancements, Challenges, and Future Directions
Junfeng Jiao, S. Afroogh, Yiming Xu, Connor Phillips (14 May 2024) [AILaw]

Foundation Model for Advancing Healthcare: Challenges, Opportunities, and Future Directions
Yuting He, Fuxiang Huang, Xinrui Jiang, Yuxiang Nie, Minghao Wang, Jiguang Wang, Hao Chen (04 Apr 2024) [LM&MA, AI4CE]

Robust Pronoun Fidelity with English LLMs: Are they Reasoning, Repeating, or Just Biased?
Vagrant Gautam, Eileen Bingert, D. Zhu, Anne Lauscher, Dietrich Klakow (04 Apr 2024)

Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Zichong Wang, Wenbin Zhang (31 Mar 2024) [AILaw]

A Piece of Theatre: Investigating How Teachers Design LLM Chatbots to Assist Adolescent Cyberbullying Education
Michael A. Hedderich, Natalie N. Bazarova, Wenting Zou, Ryun Shim, Xinda Ma, Qian Yang (27 Feb 2024)

A Group Fairness Lens for Large Language Models
Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiao-feng He (24 Dec 2023) [ALM]

A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Eric Sun, Yue Zhang (04 Dec 2023) [PILM, ELM]

Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models
Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H. Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee (17 Oct 2023)

Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui (03 Sep 2023)

Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms
Organizers of QueerInAI, Nathaniel Dennler, Anaelia Ovalle, Ashwin Singh, Luca Soldaini, ..., Kyra Yee, Irene Font Peradejordi, Zeerak Talat, Mayra Russo, Jessica de Jesus de Pinho Pinhal (15 Jul 2023)

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith
Melissa Hall
Melanie Kambadur
Eleonora Presani
Adina Williams
79
130
0
18 May 2022