Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung
arXiv:2403.18932, 27 March 2024
Papers citing "Measuring Political Bias in Large Language Models: What Is Said and How It Is Said" (21 papers)
Probing the Subtle Ideological Manipulation of Large Language Models
Demetris Paschalides, G. Pallis, M. Dikaiakos (19 Apr 2025)

Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets
Molly Kennedy, Ayyoob Imani, Timo Spinde, Hinrich Schütze (20 Mar 2025)

Linear Representations of Political Perspective Emerge in Large Language Models
Junsol Kim, James Evans, Aaron Schein (03 Mar 2025)

A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Ina Dormuth, Sven Franke, Marlies Hafer, Tim Katzke, Alexander Marx, Emmanuel Müller, Daniel Neider, Markus Pauly, Jérôme Rutinowski (21 Feb 2025)

Hope vs. Hate: Understanding User Interactions with LGBTQ+ News Content in Mainstream US News Media through the Lens of Hope Speech
Jonathan Pofcher, Christopher Homan, Randall Sell, Ashiqur R. KhudaBukhsh (13 Feb 2025)

Unmasking Conversational Bias in AI Multiagent Systems
Simone Mungari, Giuseppe Manco, Luca Maria Aiello (24 Jan 2025)

Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models
Kaiqi Yang, Hang Li, Yucheng Chu, Hang Li, Tai-Quan Peng, Yuping Lin, Hui Liu (21 Dec 2024)

MGM: Global Understanding of Audience Overlap Graphs for Predicting the Factuality and the Bias of News Media
Muhammad Arslan Manzoor, Ruihong Zeng, Dilshod Azizov, Preslav Nakov, Shangsong Liang (12 Dec 2024)

PRISM: A Methodology for Auditing Biases in Large Language Models
Leif Azzopardi, Yashar Moshfeghi (24 Oct 2024)

Large Language Models Engineer Too Many Simple Features For Tabular Data
Jaris Küken, Lennart Purucker, Frank Hutter (23 Oct 2024)

ComPO: Community Preferences for Language Model Personalization
Sachin Kumar, Chan Young Park, Yulia Tsvetkov, Noah A. Smith, Hannaneh Hajishirzi (21 Oct 2024)

Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks ?
Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis (17 Oct 2024)

AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment
Nuo Chen, Jiqun Liu, Xiaoyu Dong, Qijiong Liu, Tetsuya Sakai, Xiao-Ming Wu (24 Sep 2024)

On the Relationship between Truth and Political Bias in Language Models
S. Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, Jad Kabbara (09 Sep 2024)

Revealing Fine-Grained Values and Opinions in Large Language Models
Dustin Wright, Arnav Arora, Nadav Borenstein, Srishti Yadav, Serge J. Belongie, Isabelle Augenstein (27 Jun 2024)

Aligning Large Language Models with Diverse Political Viewpoints
Dominik Stammbach, Philine Widmer, Eunjung Cho, Çağlar Gülçehre, Elliott Ash (20 Jun 2024)

The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models
Bolei Ma, Xinpeng Wang, Tiancheng Hu, Anna Haensch, Michael A. Hedderich, Barbara Plank, Frauke Kreuter (16 Jun 2024)

The Political Preferences of LLMs
David Rozado (02 Feb 2024)

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov (14 Oct 2022)

Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts
Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, Akiko Aizawa (29 Sep 2022)

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams (18 May 2022)