ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Whose Opinions Do Language Models Reflect?
arXiv:2303.17548

30 March 2023
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto

Papers citing "Whose Opinions Do Language Models Reflect?"

50 / 277 papers shown
LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, ..., Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, Chenyang Yang
10 Jan 2025
SubData: Bridging Heterogeneous Datasets to Enable Theory-Driven Evaluation of Political and Demographic Perspectives in LLMs
Leon Fröhling, Pietro Bernardelle, Gianluca Demartini
ALM
21 Dec 2024
A Rose by Any Other Name: LLM-Generated Explanations Are Good Proxies for Human Explanations to Collect Label Distributions on NLI
Beiduo Chen, Siyao Peng, Anna Korhonen, Barbara Plank
18 Dec 2024
QUENCH: Measuring the gap between Indic and Non-Indic Contextual General Reasoning in LLMs
Mohammad Aflah Khan, Neemesh Yadav, Sarah Masud, Md. Shad Akhtar
16 Dec 2024
Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection
Arij Riabi, Virginie Mouilleron, Menel Mahamdi, Wissam Antoun, Djamé Seddah
16 Dec 2024
Improving LLM Group Fairness on Tabular Data via In-Context Learning
Valeriia Cherepanova, Chia-Jung Lee, Nil-Jana Akpinar, Riccardo Fogliato, Martín Bertrán, Michael Kearns, James Zou
LMTD
05 Dec 2024
A dataset of questions on decision-theoretic reasoning in Newcomb-like problems
Caspar Oesterheld, Emery Cooper, Miles Kodama, Linh Chi Nguyen, Ethan Perez
15 Nov 2024
Understanding The Effect Of Temperature On Alignment With Human Opinions
Maja Pavlovic, Massimo Poesio
15 Nov 2024
Contextualized Evaluations: Taking the Guesswork Out of Language Model Evaluations
Chaitanya Malaviya, Joseph Chee Chang, Dan Roth, Mohit Iyyer, Mark Yatskar, Kyle Lo
ELM
11 Nov 2024
One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity
Sonia K. Murthy, Tomer Ullman, Jennifer Hu
ALM
07 Nov 2024
Summarization of Opinionated Political Documents with Varied Perspectives
Nicholas Deas, Kathleen McKeown
06 Nov 2024
Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters
Yujin Potter, Shiyang Lai, Junsol Kim, James Evans, D. Song
31 Oct 2024
Representative Social Choice: From Learning Theory to AI Alignment
Tianyi Qiu
FedML
31 Oct 2024
Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection
M. Rahman, Fan Wu, A. Cuzzocrea, S. Ahamed
AAML
28 Oct 2024
Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks?
Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis
AAML
17 Oct 2024
Conformity in Large Language Models
Xiaochen Zhu, Caiqi Zhang, Tom Stafford, Nigel Collier, Andreas Vlachos
16 Oct 2024
Preference Optimization with Multi-Sample Comparisons
Chaoqi Wang, Zhuokai Zhao, Chen Zhu, Karthik Abinav Sankararaman, Michal Valko, ..., Zhaorun Chen, Madian Khabsa, Yuxin Chen, Hao Ma, Sinong Wang
16 Oct 2024
Personas with Attitudes: Controlling LLMs for Diverse Data Annotation
Leon Fröhling, Gianluca Demartini, Dennis Assenmacher
15 Oct 2024
LVD-2M: A Long-take Video Dataset with Temporally Dense Captions
Tianwei Xiong, Yuqing Wang, Daquan Zhou, Zhijie Lin, Jiashi Feng, Xihui Liu
VGen
14 Oct 2024
MisinfoEval: Generative AI in the Era of "Alternative Facts"
Saadia Gabriel, Liang Lyu, James Siderius, Marzyeh Ghassemi, Jacob Andreas, Asu Ozdaglar
13 Oct 2024
Which Demographics do LLMs Default to During Annotation?
Johannes Schäfer, Aidan Combs, Christopher Bagdon, Jiahui Li, Nadine Probol, ..., Yarik Menchaca Resendiz, Aswathy Velutharambath, Amelie Wuhrl, Sabine Weber, Roman Klinger
11 Oct 2024
Intuitions of Compromise: Utilitarianism vs. Contractualism
Jared Moore, Yejin Choi, Sydney Levine
07 Oct 2024
Can Language Models Reason about Individualistic Human Values and Preferences?
Liwei Jiang, Taylor Sorensen, Sydney Levine, Yejin Choi
04 Oct 2024
Kiss up, Kick down: Exploring Behavioral Changes in Multi-modal Large Language Models with Assigned Visual Personas
Seungjong Sun, Eungu Lee, Seo Yeon Baek, Seunghyun Hwang, Wonbyung Lee, Dongyan Nan, Bernard J. Jansen, Jang Hyun Kim
04 Oct 2024
Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback
Kyuyoung Kim, Ah Jeong Seo, Hao Liu, Jinwoo Shin, Kimin Lee
04 Oct 2024
LASeR: Learning to Adaptively Select Reward Models with Multi-Armed Bandits
Duy Nguyen, Archiki Prasad, Elias Stengel-Eskin, Joey Tianyi Zhou
02 Oct 2024
Examining the Role of Relationship Alignment in Large Language Models
Kristen M. Altenburger, Hongda Jiang, Robert E. Kraut, Yi-Chia Wang, Jane Dwivedi-Yu
02 Oct 2024
Understanding How Psychological Distance Influences User Preferences in Conversational Versus Web Search
Yitian Yang, Yugin Tan, Yang Chen Lin, Jung-Tai King, Zihan Liu, Yi-Chieh Lee
30 Sep 2024
'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants
Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah E Fox
28 Sep 2024
Open-World Evaluation for Retrieving Diverse Perspectives
Hung-Ting Chen, Eunsol Choi
26 Sep 2024
LLM-Measure: Generating Valid, Consistent, and Reproducible Text-Based Measures for Social Science Research
Yi Yang, Hanyu Duan, Jiaxin Liu, Kar Yan Tam
19 Sep 2024
Measuring Human and AI Values Based on Generative Psychometrics with Large Language Models
Haoran Ye, Yuhang Xie, Yuanyi Ren, Hanjun Fang, Xin Zhang, Guojie Song
LM&MA
18 Sep 2024
Estimating Wage Disparities Using Foundation Models
Keyon Vafa, Susan Athey, David M. Blei
15 Sep 2024
ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen, Tiffany Knearem, Reshmi Ghosh, Yu-Ju Yang, Tanushree Mitra, Yun Huang
15 Sep 2024
On the Relationship between Truth and Political Bias in Language Models
S. Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, Jad Kabbara
HILM
09 Sep 2024
Programming Refusal with Conditional Activation Steering
Bruce W. Lee, Inkit Padhi, K. Ramamurthy, Erik Miehling, Pierre L. Dognin, Manish Nagireddy, Amit Dhurandhar
LLMSV
06 Sep 2024
Self-Alignment: Improving Alignment of Cultural Values in LLMs via In-Context Learning
Rochelle Choenni, Ekaterina Shutova
29 Aug 2024
United in Diversity? Contextual Biases in LLM-Based Predictions of the 2024 European Parliament Elections
Leah von der Heyde, Anna Haensch, Alexander Wenz, Bolei Ma
29 Aug 2024
LLMs generate structurally realistic social networks but overestimate political homophily
Serina Chang, Alicja Chaszczewicz, Emma Wang, Maya Josifovska, Emma Pierson, J. Leskovec
29 Aug 2024
Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models
Ned Cooper, Alexandra Zafiroglu
27 Aug 2024
How will advanced AI systems impact democracy?
Christopher Summerfield, Lisa Argyle, Michiel Bakker, Teddy Collins, Esin Durmus, ..., Elizabeth Seger, Divya Siddarth, Henrik Skaug Sætra, MH Tessler, M. Botvinick
27 Aug 2024
Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Kristina Gligorić, Tijana Zrnic, Cinoo Lee, Emmanuel J. Candès, Dan Jurafsky
27 Aug 2024
Interactive DualChecker for Mitigating Hallucinations in Distilling Large Language Models
Meiyun Wang, Masahiro Suzuki, Hiroki Sakaji, Kiyoshi Izumi
VLM
22 Aug 2024
Improving and Assessing the Fidelity of Large Language Models Alignment to Online Communities
Minh Duc Hoang Chu, Zihao He, Rebecca Dorn, Kristina Lerman
18 Aug 2024
The Future of Open Human Feedback
Shachar Don-Yehiya, Ben Burtenshaw, Ramon Fernandez Astudillo, Cailean Osborne, Mimansa Jaiswal, ..., Omri Abend, Jennifer Ding, Sara Hooker, Hannah Rose Kirk, Leshem Choshen
VLM, ALM
15 Aug 2024
Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas
Louis Kwok, Michal Bravansky, Lewis D. Griffin
13 Aug 2024
GPT-4 Emulates Average-Human Emotional Cognition from a Third-Person Perspective
Ala Nekouvaght Tak, Jonathan Gratch
11 Aug 2024
Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil
Marcelo Sartori Locatelli, Matheus Prado Miranda, Igor Joaquim da Silva Costa, Matheus Torres Prates, Victor Thomé, ..., Tomas Lacerda, Adriana Pagano, Eduardo Rios Neto, Wagner Meira Jr., Virgílio A. F. Almeida
ELM
09 Aug 2024
Are Social Sentiments Inherent in LLMs? An Empirical Study on Extraction of Inter-demographic Sentiments
Kunitomo Tanaka, Ryohei Sasano, Koichi Takeda
08 Aug 2024
GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy
Jan Batzner, Volker Stocker, Stefan Schmid, Gjergji Kasneci
25 Jul 2024