Whose Opinions Do Language Models Reflect?

30 March 2023
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
arXiv: 2303.17548

Papers citing "Whose Opinions Do Language Models Reflect?"

Showing 50 of 277 citing papers.

WorldValuesBench: A Large-Scale Benchmark Dataset for Multi-Cultural Value Awareness of Language Models
Wenlong Zhao, Debanjan Mondal, Niket Tandon, Danica Dillion, Kurt Gray, Yuling Gu
VLM
25 Apr 2024

CultureBank: An Online Community-Driven Knowledge Base Towards Culturally Aware Language Technologies
Weiyan Shi, Ryan Li, Yutong Zhang, Caleb Ziems, Chunhua Yu, R. Horesh, Rogério Abreu de Paula, Diyi Yang
23 Apr 2024

Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM
Michelle S. Lam, Janice Teoh, James A. Landay, Jeffrey Heer, Michael S. Bernstein
18 Apr 2024

SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs
Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin
AIFin, RALM, ELM
17 Apr 2024

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think
Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank
OOD
12 Apr 2024

CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge
Yu Ying Chiu, Amirhossein Ajalloeian, Maria Antoniak, Chan Young Park, Shuyue Stella Li, Mehar Bhatia, Sahithya Ravi, Yulia Tsvetkov, Vered Shwartz, Yejin Choi
10 Apr 2024

SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger, Fabio Pernisi, Bertie Vidgen, Dirk Hovy
ELM, KELM
08 Apr 2024

Mapping the Increasing Use of LLMs in Scientific Papers
Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, ..., Zhi Huang, Diyi Yang, Christopher Potts, Christopher D. Manning, James Y. Zou
AI4CE, DeLMO
01 Apr 2024

Secret Keepers: The Impact of LLMs on Linguistic Markers of Personal Traits
Zhivar Sourati, Meltem Ozcan, Colin McDaniel, Alireza S. Ziabari, Nuan Wen, Ala Nekouvaght Tak, Fred Morstatter, Morteza Dehghani
PILM
30 Mar 2024

Large Language Models Produce Responses Perceived to be Empathic
Yoon Kyung Lee, Jina Suh, Hongli Zhan, Junyi Jessy Li, Desmond C. Ong
AI4MH
26 Mar 2024

The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
25 Mar 2024

Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs
Ilias Chalkidis, Stephanie Brandl
20 Mar 2024

Provable Multi-Party Reinforcement Learning with Diverse Human Feedback
Huiying Zhong, Zhun Deng, Weijie J. Su, Zhiwei Steven Wu, Linjun Zhang
08 Mar 2024

A Safe Harbor for AI Evaluation and Red Teaming
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, ..., Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson
07 Mar 2024

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie
OffRL
07 Mar 2024

Don't Blame the Data, Blame the Model: Understanding Noise and Bias When Learning from Subjective Annotations
Abhishek Anand, Negar Mokhberian, Prathyusha Naresh Kumar, Anweasha Saha, Zihao He, Ashwin Rao, Fred Morstatter, Kristina Lerman
06 Mar 2024

Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations
Max Lamparth, Anthony Corso, Jacob Ganz, O. Mastro, Jacquelyn G. Schneider, Harold Trinkunas
06 Mar 2024

Towards Measuring and Modeling "Culture" in LLMs: A Survey
Muhammad Farid Adilazuarda, Sagnik Mukherjee, Pradhyumna Lavania, Siddhant Singh, Alham Fikri Aji, Jacki O'Neill, Ashutosh Modi, Monojit Choudhury
05 Mar 2024

Random Silicon Sampling: Simulating Human Sub-Population Opinion Using a Large Language Model Based on Group-Level Demographic Information
Seungjong Sun, Eungu Lee, Dongyan Nan, Xiangying Zhao, Wonbyung Lee, Bernard J. Jansen, Jang Hyun Kim
28 Feb 2024

Beyond prompt brittleness: Evaluating the reliability and consistency of political worldviews in LLMs
Tanise Ceron, Neele Falk, Ana Barić, Dmitry Nikolaev, Sebastian Padó
27 Feb 2024

Predict the Next Word: Humans exhibit uncertainty in this task and language models _____
Evgenia Ilia, Wilker Aziz
27 Feb 2024

Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
Paul Röttger, Valentin Hofmann, Valentina Pyatkin, Musashi Hinck, Hannah Rose Kirk, Hinrich Schütze, Dirk Hovy
ELM
26 Feb 2024

Unintended Impacts of LLM Alignment on Global Representation
Michael Joseph Ryan, William B. Held, Diyi Yang
22 Feb 2024
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in
  Instruction-Tuned Language Models
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models
Xinpeng Wang
Bolei Ma
Chengzhi Hu
Leon Weber-Genzel
Paul Röttger
Frauke Kreuter
Dirk Hovy
Barbara Plank
19
41
0
22 Feb 2024

Eagle: Ethical Dataset Given from Real Interactions
Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin
22 Feb 2024

KorNAT: LLM Alignment Benchmark for Korean Social Values and Common Knowledge
Jiyoung Lee, Minwoo Kim, Seungho Kim, Junghwan Kim, Seunghyun Won, Hwaran Lee, Edward Choi
ALM
21 Feb 2024

IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction
Inna Wanyin Lin, Ashish Sharma, Christopher Rytting, Adam S. Miner, Jina Suh, Tim Althoff
19 Feb 2024

Polarization of Autonomous Generative AI Agents Under Echo Chambers
Masaya Ohagi
LLMAG
19 Feb 2024

Stick to Your Role! Context-dependence and Stability of Personal Value Expression in Large Language Models
Grgur Kovač, Rémy Portelas, Masataka Sawayama, Peter Ford Dominey, Pierre-Yves Oudeyer
LLMAG
19 Feb 2024

How Susceptible are Large Language Models to Ideological Manipulation?
Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman
18 Feb 2024

Whose Emotions and Moral Sentiments Do Language Models Reflect?
Zihao He, Siyi Guo, Ashwin Rao, Kristina Lerman
16 Feb 2024

Quantifying the Persona Effect in LLM Simulations
Tiancheng Hu, Nigel Collier
16 Feb 2024

I Am Not Them: Fluid Identities and Persistent Out-group Bias in Large Language Models
Wenchao Dong, Assem Zhunis, Hyojin Chin, Jiyoung Han, Meeyoung Cha
16 Feb 2024

Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement
Chenkai Sun, Ke Yang, R. Reddy, Yi R. Fung, Hou Pong Chan, ChengXiang Zhai, Heng Ji
16 Feb 2024

Chain-of-Planned-Behaviour Workflow Elicits Few-Shot Mobility Generation in LLMs
Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, Yong Li
LRM
15 Feb 2024

(Ir)rationality and Cognitive Biases in Large Language Models
Olivia Macmillan-Scott, Mirco Musolesi
LRM
14 Feb 2024

MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences
Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Furong Huang, Dinesh Manocha, Amrit Singh Bedi, Mengdi Wang
ALM
14 Feb 2024

Assessing Generalization for Subpopulation Representative Modeling via In-Context Learning
Gabriel Simmons, Vladislav Savinov
12 Feb 2024

Antagonistic AI
Alice Cai, Ian Arawjo, Elena L. Glassman
12 Feb 2024

Personalized Language Modeling from Personalized Human Feedback
Xinyu Li, Zachary C. Lipton, Liu Leqi
ALM
06 Feb 2024

Bias in Opinion Summarisation from Pre-training to Adaptation: A Case Study in Political Bias
Nannan Huang, Haytham M. Fayek, Xiuzhen Zhang
01 Feb 2024

CroissantLLM: A Truly Bilingual French-English Language Model
Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison, Duarte M. Alves, ..., François Yvon, André F.T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo
01 Feb 2024

Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation
Terrence Neumann, Sooyong Lee, Maria De-Arteaga, S. Fazelpour, Matthew Lease
29 Jan 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024

WARM: On the Benefits of Weight Averaged Reward Models
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
22 Jan 2024

AI for social science and social science of AI: A Survey
Ruoxi Xu, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, Xianpei Han
22 Jan 2024

Bridging Cultural Nuances in Dialogue Agents through Cultural Value Surveys
Yong Cao, Min Chen, Daniel Hershcovich
18 Jan 2024

Canvil: Designerly Adaptation for LLM-Powered User Experiences
K. J. Kevin Feng, Q. V. Liao, Ziang Xiao, Jennifer Wortman Vaughan, Amy X. Zhang, David W. McDonald
17 Jan 2024

Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty
Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Maarten Sap
12 Jan 2024

A Computational Framework for Behavioral Assessment of LLM Therapists
Yu Ying Chiu, Ashish Sharma, Inna Wanyin Lin, Tim Althoff
AI4MH
01 Jan 2024