I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large Language Models

16 February 2024
Wenchao Dong, Assem Zhunis, Hyojin Chin, Jiyoung Han, Meeyoung Cha

Papers citing "I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large Language Models"

17 papers shown
"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in
  LLM-Generated Reference Letters
"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
Yixin Wan
George Pu
Jiao Sun
Aparna Garimella
Kai-Wei Chang
Nanyun Peng
116
195
0
13 Oct 2023
Shaping the Emerging Norms of Using Large Language Models in Social Computing Research
Hong Shen, Tianshi Li, Toby Jia-Jun Li, J. Park, Diyi Yang
09 Jul 2023

Multilingual Language Models are not Multicultural: A Case Study in Emotion
Shreya Havaldar, Sunny Rai, Bhumika Singhal, Langchen Liu, Sharath Chandra Guntuku, Lyle Ungar
03 Jul 2023

ChatGPT Is More Likely to Be Perceived as Male Than Female
Jared Wong, Jin Kim
21 May 2023

Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
Emilio Ferrara
07 Apr 2023

Towards Interpretable Mental Health Analysis with Large Language Models
Kailai Yang, Shaoxiong Ji, Tianlin Zhang, Qianqian Xie, Zi-Zhou Kuang, Sophia Ananiadou
06 Apr 2023

Whose Opinions Do Language Models Reflect?
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
30 Mar 2023

The Myth of Culturally Agnostic AI Models
E. Cetinic
28 Nov 2022

Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?
Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang
25 Nov 2022

Out of One, Many: Using Language Models to Simulate Human Samples
Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, David Wingate
14 Sep 2022

In conversation with Artificial Intelligence: aligning language models with human values
Atoosa Kasirzadeh, Iason Gabriel
01 Sep 2022

Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
Gati Aher, Rosa I. Arriaga, Adam Tauman Kalai
18 Aug 2022

Human heuristics for AI-generated language are flawed
Maurice Jakesch, Jeffrey T. Hancock, Mor Naaman
15 Jun 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Mitigating Political Bias in Language Models Through Reinforced Calibration
Ruibo Liu, Chenyan Jia, Jason W. Wei, Guangxuan Xu, Lili Wang, Soroush Vosoughi
30 Apr 2021

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
28 Nov 2019

Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media
Juhi Kulshrestha, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, Krishna P. Gummadi, Karrie Karahalios
05 Apr 2017