ResearchTrend.AI

BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation (arXiv:2101.11718)

27 January 2021
Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta

Papers citing "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation"

50 / 81 papers shown
BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models
Zhiting Fan, Ruizhe Chen, Zuozhu Liu
30 Apr 2025

Bias Analysis and Mitigation through Protected Attribute Detection and Regard Classification
Takuma Udagawa, Yang Zhao, H. Kanayama, Bishwaranjan Bhattacharjee
19 Apr 2025

Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages
Alessio Buscemi, Cedric Lothritz, Sergio Morales, Marcos Gomez-Vazquez, Robert Clarisó, Jordi Cabot, German Castignani
19 Apr 2025

Benchmarking Multi-National Value Alignment for Large Language Models
Chengyi Ju, Weijie Shi, Chengzhong Liu, Yalan Qin, Jipeng Zhang, ..., Jia Zhu, Jiajie Xu, Yaodong Yang, Sirui Han, Yike Guo
17 Apr 2025

Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo
04 Feb 2025

Unmasking Conversational Bias in AI Multiagent Systems
Simone Mungari, Giuseppe Manco, Luca Maria Aiello
LLMAG
24 Jan 2025

LLMScan: Causal Scan for LLM Misbehavior Detection
Mengdi Zhang, Kai Kiat Goh, Peixin Zhang, Jun Sun, Rose Lin Xin, Hongyu Zhang
22 Oct 2024
No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users
Mengxuan Hu, Hongyi Wu, Zihan Guan, Ronghang Zhu, Dongliang Guo, Daiqing Qi, Sheng Li
SILM
10 Oct 2024

COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act
Philipp Guldimann, Alexander Spiridonov, Robin Staab, Nikola Jovanović, Mark Vero, ..., Mislav Balunović, Nikola Konstantinov, Pavol Bielik, Petar Tsankov, Martin Vechev
ELM
10 Oct 2024

STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
Robert D Morabito, Sangmitra Madhusudan, Tyler McDonald, Ali Emami
20 Sep 2024

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
Kunsheng Tang, Wenbo Zhou, Jie Zhang, Aishan Liu, Gelei Deng, Shuai Li, Peigui Qi, Weiming Zhang, Tianwei Zhang, Nenghai Yu
22 Aug 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation
Riccardo Cantini, Giada Cosenza, A. Orsino, Domenico Talia
AAML
11 Jul 2024

On Speeding Up Language Model Evaluation
Jin Peng Zhou, Christian K. Belardi, Ruihan Wu, Travis Zhang, Carla P. Gomes, Wen Sun, Kilian Q. Weinberger
08 Jul 2024

Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression
Zhichao Xu, Ashim Gupta, Tao Li, Oliver Bentham, Vivek Srikumar
06 Jul 2024

CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li
CoGe
02 Jul 2024
Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing
Han Jiang, Xiaoyuan Yi, Zhihua Wei, Shu Wang, Xing Xie
ALM, ELM
20 Jun 2024

CHiSafetyBench: A Chinese Hierarchical Safety Benchmark for Large Language Models
Wenjing Zhang, Xuejiao Lei, Zhaoxiang Liu, Meijuan An, Bikun Yang, Kaikai Zhao, Kai Wang, Shiguo Lian
ELM
14 Jun 2024

Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models
Jisu Shin, Hoyun Song, Huije Lee, Soyeong Jeong, Jong C. Park
06 Jun 2024

Large Language Models Meet NLP: A Survey
Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Hai-Tao Zheng, Min Li, Wanxiang Che, Philip S. Yu
ALM, LM&MA, ELM, LRM
21 May 2024

Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models
Paula Akemi Aoyagui, Sharon Ferguson, Anastasia Kuzminykh
17 May 2024

Quite Good, but Not Enough: Nationality Bias in Large Language Models -- A Case Study of ChatGPT
Shucheng Zhu, Weikang Wang, Ying Liu
11 May 2024

Are Models Biased on Text without Gender-related Language?
Catarina G Belém, P. Seshadri, Yasaman Razeghi, Sameer Singh
01 May 2024

Laissez-Faire Harms: Algorithmic Biases in Generative Language Models
Evan Shieh, Faye-Marie Vassel, Cassidy R. Sugimoto, T. Monroe-White
11 Apr 2024
SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger, Fabio Pernisi, Bertie Vidgen, Dirk Hovy
ELM, KELM
08 Apr 2024

HyperCLOVA X Technical Report
Kang Min Yoo, Jaegeun Han, Sookyo In, Heewon Jeon, Jisu Jeong, ..., Hyunkyung Noh, Se-Eun Choi, Sang-Woo Lee, Jung Hwa Lim, Nako Sung
VLM
02 Apr 2024

Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung
27 Mar 2024

Detecting Bias in Large Language Models: Fine-tuned KcBERT
J. K. Lee, T. M. Chung
16 Mar 2024

Evaluating LLMs for Gender Disparities in Notable Persons
L. Rhue, Sofie Goethals, Arun Sundararajan
14 Mar 2024

COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models
Priyanshul Govil, Hemang Jain, Vamshi Krishna Bonagiri, Aman Chadha, Ponnurangam Kumaraguru, Manas Gaur, Sanorita Dey
22 Feb 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024

Benchmarking LLMs via Uncertainty Quantification
Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu
ELM
23 Jan 2024

LLM360: Towards Fully Transparent Open-Source LLMs
Zhengzhong Liu, Aurick Qiao, Willie Neiswanger, Hongyi Wang, Bowen Tan, ..., Zhiting Hu, Mark Schulze, Preslav Nakov, Timothy Baldwin, Eric Xing
11 Dec 2023
JAB: Joint Adversarial Prompting and Belief Augmentation
Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Jwala Dhamala, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta
AAML
16 Nov 2023

Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models
Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, Michael R. Lyu
19 Oct 2023

"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
13 Oct 2023

No Offense Taken: Eliciting Offensiveness from Language Models
Anugya Srivastava, Rahul Ahuja, Rohith Mukku
02 Oct 2023

Foundation Metrics for Evaluating Effectiveness of Healthcare Conversations Powered by Generative AI
Mahyar Abbasian, Elahe Khatibi, Iman Azimi, David Oniani, Zahra Shakeri Hossein Abad, ..., Bryant Lin, Olivier Gevaert, Li-Jia Li, Ramesh C. Jain, Amir M. Rahmani
LM&MA, ELM, AI4MH
21 Sep 2023

OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs
Patrick Haller, Ansar Aynetdinov, Alan Akbik
07 Sep 2023

FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models
Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liangbo He
21 Aug 2023

A Survey on Fairness in Large Language Models
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang
ALM
20 Aug 2023

Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhi-Ping Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
LLMAG
08 Aug 2023
Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
AI4MH, ALM
18 Jul 2023

Learning to Generate Equitable Text in Dialogue from Biased Training Data
Anthony Sicilia, Malihe Alikhani
10 Jul 2023

CFL: Causally Fair Language Models Through Token-level Attribute Controlled Generation
Rahul Madhavan, Rishabh Garg, Kahini Wadhawan, S. Mehta
01 Jun 2023

An Efficient Multilingual Language Model Compression through Vocabulary Trimming
Asahi Ushio, Yi Zhou, Jose Camacho-Collados
24 May 2023

Should We Attend More or Less? Modulating Attention for Fairness
A. Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar
22 May 2023

"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Anaelia Ovalle, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, R. Zemel, Rahul Gupta
17 May 2023

Beyond the Safeguards: Exploring the Security Risks of ChatGPT
Erik Derner, Kristina Batistic
SILM
13 May 2023

Emolysis: A Multimodal Open-Source Group Emotion Analysis and Visualization Toolkit
Shreya Ghosh, Zhixi Cai, Parul Gupta, Garima Sharma, Abhinav Dhall, Munawar Hayat, Tom Gedeon
09 May 2023