Semantics derived automatically from language corpora contain human-like biases

25 August 2016
Aylin Caliskan
J. Bryson
Arvind Narayanan
arXiv:1608.07187 · PDF · HTML

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 310 papers shown
A Comprehensive Analysis of Large Language Model Outputs: Similarity, Diversity, and Bias
Brandon Smith
Mohamed Reda Bouadjenek
Tahsin Alamgir Kheya
Phillip Dawson
S. Aryal
ALM
ELM
26
0
0
14 May 2025
Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text
Jennifer Healey
Laurie Byrum
Md Nadeem Akhtar
Surabhi Bhargava
Moumita Sinha
31
0
0
05 May 2025
Whence Is A Model Fair? Fixing Fairness Bugs via Propensity Score Matching
Kewen Peng
Yicheng Yang
Hao Zhuo
32
0
0
23 Apr 2025
Generalization Bias in Large Language Model Summarization of Scientific Research
Uwe Peters
Benjamin Chin-Yee
ELM
41
0
0
28 Mar 2025
An evaluation of LLMs and Google Translate for translation of selected Indian languages via sentiment and semantic analyses
Rohitash Chandra
Aryan Chaudhary
Yeshwanth Rayavarapu
44
0
0
27 Mar 2025
Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni
Tyler Zhu
Olga Russakovsky
V. V. Ramaswamy
45
0
0
25 Mar 2025
Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental
Roberto Balestri
46
0
0
18 Mar 2025
Implicit Bias-Like Patterns in Reasoning Models
Messi H.J. Lee
Calvin K. Lai
LRM
58
0
0
14 Mar 2025
Benchmarking the rationality of AI decision making using the transitivity axiom
Kiwon Song
James M. Jennings III
Clintin P. Davis-Stober
41
0
0
14 Feb 2025
Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management
Xiahua Wei
Naveen Kumar
Han Zhang
68
5
0
22 Jan 2025
Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate Patient Perspectives
Xinyao Ma
Rui Zhu
Zihao Wang
Jingwei Xiong
Qingyu Chen
Haixu Tang
L. Jean Camp
Lucila Ohno-Machado
LM&MA
46
0
0
12 Jan 2025
Bridging the Fairness Gap: Enhancing Pre-trained Models with LLM-Generated Sentences
Liu Yu
Ludie Guo
Ping Kuang
Fan Zhou
44
0
0
12 Jan 2025
Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection
Yachao Zhao
Bo Wang
Yan Wang
50
2
0
04 Jan 2025
ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning
Wonduk Seo
Zonghao Yuan
Yi Bu
VLM
50
1
0
02 Jan 2025
Perception of Visual Content: Differences Between Humans and Foundation Models
Nardiena A. Pratama
Shaoyang Fan
Gianluca Demartini
VLM
97
0
0
28 Nov 2024
Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings
Carolin M. Schuster
Maria-Alexandra Dinisor
Shashwat Ghatiwala
Georg Groh
79
1
0
25 Nov 2024
Enabling Scalable Evaluation of Bias Patterns in Medical LLMs
Hamed Fayyaz
Raphael Poulain
Rahmatollah Beheshti
40
1
0
18 Oct 2024
Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors
Georgios Chochlakis
Alexandros Potamianos
Kristina Lerman
Shrikanth Narayanan
32
0
0
17 Oct 2024
No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users
Mengxuan Hu
Hongyi Wu
Zihan Guan
Ronghang Zhu
Dongliang Guo
Daiqing Qi
Sheng Li
SILM
38
3
0
10 Oct 2024
Collapsed Language Models Promote Fairness
Jingxuan Xu
Wuyang Chen
Linyi Li
Yao Zhao
Yunchao Wei
44
0
0
06 Oct 2024
Mitigating Propensity Bias of Large Language Models for Recommender Systems
Guixian Zhang
Guan Yuan
Debo Cheng
Lin Liu
Jiuyong Li
Shichao Zhang
44
2
0
30 Sep 2024
Analyzing Correlations Between Intrinsic and Extrinsic Bias Metrics of Static Word Embeddings With Their Measuring Biases Aligned
Taisei Katô
Yusuke Miyao
19
0
0
14 Sep 2024
Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro
Charlie M. Crawford
Danaë Metaxa
Sorelle A. Friedler
26
0
0
09 Sep 2024
Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models
Hila Gonen
Terra Blevins
Alisa Liu
Luke Zettlemoyer
Noah A. Smith
31
5
0
12 Aug 2024
Vectoring Languages
Joseph Chen
33
0
0
16 Jul 2024
Bringing AI Participation Down to Scale: A Comment on Open AIs Democratic Inputs to AI Project
David Moats
Chandrima Ganguly
VLM
40
0
0
16 Jul 2024
CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
Song Wang
Peng Wang
Tong Zhou
Yushun Dong
Zhen Tan
Jundong Li
CoGe
56
7
0
02 Jul 2024
A Study of Nationality Bias in Names and Perplexity using Off-the-Shelf Affect-related Tweet Classifiers
Valentin Barriere
Sebastian Cifuentes
28
0
0
01 Jul 2024
GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
Yisong Xiao
Aishan Liu
QianJia Cheng
Zhenfei Yin
Siyuan Liang
Jiapeng Li
Jing Shao
Xianglong Liu
Dacheng Tao
48
4
0
30 Jun 2024
Large Language Models are Biased Because They Are Large Language Models
Philip Resnik
24
8
0
19 Jun 2024
Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender?
Haozhe An
Christabel Acquaye
Colin Wang
Zongxia Li
Rachel Rudinger
36
12
0
15 Jun 2024
Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models
Jisu Shin
Hoyun Song
Huije Lee
Soyeong Jeong
Jong C. Park
38
6
0
06 Jun 2024
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
Dyah Adila
Shuai Zhang
Boran Han
Yuyang Wang
AAML
LLMSV
34
6
0
05 Jun 2024
Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models
Paula Akemi Aoyagui
Sharon Ferguson
Anastasia Kuzminykh
50
0
0
17 May 2024
Quite Good, but Not Enough: Nationality Bias in Large Language Models -- A Case Study of ChatGPT
Shucheng Zhu
Weikang Wang
Ying Liu
37
5
0
11 May 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
53
2
0
06 May 2024
Influence of Solution Efficiency and Valence of Instruction on Additive and Subtractive Solution Strategies in Humans and GPT-4
Lydia Uhler
Verena Jordan
Jürgen Buder
Markus Huff
F. Papenmeier
46
0
0
25 Apr 2024
REQUAL-LM: Reliability and Equity through Aggregation in Large Language Models
Sana Ebrahimi
N. Shahbazi
Abolfazl Asudeh
37
1
0
17 Apr 2024
Closing the Gap in the Trade-off between Fair Representations and Accuracy
Biswajit Rout
Ananya B. Sai
Arun Rajkumar
FaML
16
0
0
15 Apr 2024
SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger
Fabio Pernisi
Bertie Vidgen
Dirk Hovy
ELM
KELM
58
31
0
08 Apr 2024
Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness
Luca Deck
Jan-Laurin Müller
Conradin Braun
Domenique Zipperling
Niklas Kühl
FaML
41
5
0
29 Mar 2024
Evaluating LLMs for Gender Disparities in Notable Persons
L. Rhue
Sofie Goethals
Arun Sundararajan
52
4
0
14 Mar 2024
Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies
Vincent Freiberger
Erik Buchmann
AILaw
38
5
0
12 Mar 2024
Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
Flor Miriam Plaza del Arco
Amanda Cercas Curry
Alba Curry
Gavin Abercrombie
Dirk Hovy
34
24
0
05 Mar 2024
What's in a Name? Auditing Large Language Models for Race and Gender Bias
Amit Haim
Alejandro Salinas
Julian Nyarko
53
32
0
21 Feb 2024
Measuring Social Biases in Masked Language Models by Proxy of Prediction Quality
Rahul Zalkikar
Kanchan Chandra
31
1
0
21 Feb 2024
Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation
Kristian Lum
Jacy Reese Anthis
Chirag Nagpal
Alexander D'Amour
31
14
0
20 Feb 2024
Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You
Felix Friedrich
Katharina Hämmerl
P. Schramowski
Manuel Brack
Jindřich Libovický
Kristian Kersting
Alexander Fraser
EGVM
24
10
0
29 Jan 2024
Quantifying Stereotypes in Language
Yang Liu
38
1
0
28 Jan 2024
Legal and ethical implications of applications based on agreement technologies: the case of auction-based road intersections
José-Antonio Santos
Alberto Fernández
Mar Moreno-Rebato
Holger Billhardt
José-A. Rodríguez-García
Sascha Ossowski
14
3
0
18 Jan 2024