From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
arXiv:2305.08283. 15 May 2023.
Shangbin Feng, Chan Young Park, Yuhan Liu, Yulia Tsvetkov
Papers citing "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models"

50 of 142 citing papers shown:
Do Words Reflect Beliefs? Evaluating Belief Depth in Large Language Models
Shariar Kabir, Kevin Esterling, Yue Dong. 23 Apr 2025.

Biased by Design: Leveraging AI Biases to Enhance Critical Thinking of News Readers
L. Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones. 20 Apr 2025.

Bias Analysis and Mitigation through Protected Attribute Detection and Regard Classification
Takuma Udagawa, Yang Zhao, H. Kanayama, Bishwaranjan Bhattacharjee. 19 Apr 2025.
Benchmarking Multi-National Value Alignment for Large Language Models
Chengyi Ju, Weijie Shi, Chengzhong Liu, Yalan Qin, Jipeng Zhang, ..., Jia Zhu, Jiajie Xu, Yaodong Yang, Sirui Han, Yike Guo. 17 Apr 2025.

Only a Little to the Left: A Theory-grounded Measure of Political Bias in Large Language Models
Mats Faulborn, Indira Sen, Max Pellert, Andreas Spitz, David Garcia. 20 Mar 2025.

LLM Generated Persona is a Promise with a Catch
Ang Li, Haozhe Chen, Hongseok Namkoong, Tianyi Peng. 18 Mar 2025.
Agent-Enhanced Large Language Models for Researching Political Institutions
Joseph R. Loffredo, Suyeol Yun. 14 Mar 2025.

Data Caricatures: On the Representation of African American Language in Pretraining Corpora
Nicholas Deas, Blake Vente, Amith Ananthram, Jessica A. Grieser, D. Patton, Shana Kleiner, James Shepard, Kathleen McKeown. 13 Mar 2025.

Adaptive political surveys and GPT-4: Tackling the cold start problem with simulated user interactions
Fynn Bachmann, Daan van der Weijden, Lucien Heitz, Cristina Sarasua, Abraham Bernstein. 12 Mar 2025.

AI-Facilitated Collective Judgements
Manon Revel, Théophile Pénigaud. 06 Mar 2025.
Linear Representations of Political Perspective Emerge in Large Language Models
Junsol Kim, James Evans, Aaron Schein. 03 Mar 2025.

An Empirical Analysis of LLMs for Countering Misinformation
A. Proma, Neeley Pate, James Druckman, Gourab Ghoshal, Hangfeng He, Ehsan Hoque. 28 Feb 2025.

Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases
Shanshan Xu, T. Y. S. S. Santosh, Yanai Elazar, Quirin Vogel, Barbara Plank, Matthias Grabmair. 25 Feb 2025.

Language Model Fine-Tuning on Scaled Survey Data for Predicting Distributions of Public Opinions
Joseph Suh, Erfan Jahanparast, Suhong Moon, Minwoo Kang, Serina Chang. 24 Feb 2025.
Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models
Yue Xu, Chengyan Fu, Li Xiong, Sibei Yang, Wenjie Wang. 17 Feb 2025.

Hope vs. Hate: Understanding User Interactions with LGBTQ+ News Content in Mainstream US News Media through the Lens of Hope Speech
Jonathan Pofcher, Christopher Homan, Randall Sell, Ashiqur R. KhudaBukhsh. 13 Feb 2025.

Implicit Communication of Contextual Information in Human-Robot Collaboration
Yan Zhang. 09 Feb 2025.

The Impact of Persona-based Political Perspectives on Hateful Content Detection
Stefano Civelli, Pietro Bernardelle, Gianluca Demartini. 01 Feb 2025.

Scopes of Alignment
Kush R. Varshney, Zahra Ashktorab, Djallel Bouneffouf, Matthew D Riemer, Justin D. Weisz. 15 Jan 2025.
Beyond Partisan Leaning: A Comparative Analysis of Political Bias in Large Language Models
Kaiqi Yang, Hang Li, Yucheng Chu, Hang Li, Tai-Quan Peng, Yuping Lin, Hui Liu. 21 Dec 2024.

Bias Vector: Mitigating Biases in Language Models with Task Arithmetic Approach
Daiki Shirafuji, Makoto Takenaka, Shinya Taguchi. 16 Dec 2024.

Using Machine Learning to Distinguish Human-written from Machine-generated Creative Fiction
Andrea Cristina McGlinchey, Peter J Barclay. 15 Dec 2024.

How far can bias go? -- Tracing bias from pretraining data to alignment
Marion Thaler, Abdullatif Köksal, Alina Leidinger, Anna Korhonen, Hinrich Schutze. 28 Nov 2024.

A dataset of questions on decision-theoretic reasoning in Newcomb-like problems
Caspar Oesterheld, Emery Cooper, Miles Kodama, Linh Chi Nguyen, Ethan Perez. 15 Nov 2024.
Summarization of Opinionated Political Documents with Varied Perspectives
Nicholas Deas, Kathleen McKeown. 06 Nov 2024.

LLM Generated Distribution-Based Prediction of US Electoral Results, Part I
Caleb Bradshaw, Caelen Miller, Sean Warnick. 05 Nov 2024.

Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters
Yujin Potter, Shiyang Lai, Junsol Kim, James Evans, D. Song. 31 Oct 2024.

PRISM: A Methodology for Auditing Biases in Large Language Models
Leif Azzopardi, Yashar Moshfeghi. 24 Oct 2024.

ComPO: Community Preferences for Language Model Personalization
Sachin Kumar, Chan Young Park, Yulia Tsvetkov, Noah A. Smith, Hannaneh Hajishirzi. 21 Oct 2024.
Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning
Xiaochuan Li, Zichun Yu, Chenyan Xiong. 18 Oct 2024.

Ethics Whitepaper: Whitepaper on Ethical Research into Large Language Models
Eddie L. Ungless, Nikolas Vitsakis, Zeerak Talat, James Garforth, Bjorn Ross, Arno Onken, Atoosa Kasirzadeh, Alexandra Birch. 17 Oct 2024.

Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks?
Virgile Rennard, Christos Xypolopoulos, Michalis Vazirgiannis. 17 Oct 2024.

LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education
Iain Xie Weissburg, Sathvika Anand, Sharon Levy, Haewon Jeong. 17 Oct 2024.

Boosting Logical Fallacy Reasoning in LLMs via Logical Structure Tree
Yuanyuan Lei, Ruihong Huang. 15 Oct 2024.
Measuring Spiritual Values and Bias of Large Language Models
Songyuan Liu, Ziyang Zhang, Runze Yan, Wei Wu, Carl Yang, Jiaying Lu. 15 Oct 2024.

Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence
Shangbin Feng, Zifeng Wang, Yike Wang, Sayna Ebrahimi, Hamid Palangi, ..., Nathalie Rauschmayr, Yejin Choi, Yulia Tsvetkov, Chen-Yu Lee, Tomas Pfister. 15 Oct 2024.

Varying Shades of Wrong: Aligning LLMs with Wrong Answers Only
Jihan Yao, Wenxuan Ding, Shangbin Feng, Lucy Lu Wang, Yulia Tsvetkov. 14 Oct 2024.

When Neutral Summaries are not that Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries
Supriti Vijay, Aman Priyanshu, Ashique R. KhudaBukhsh. 13 Oct 2024.
Which Demographics do LLMs Default to During Annotation?
Johannes Schäfer, Aidan Combs, Christopher Bagdon, Jiahui Li, Nadine Probol, ..., Yarik Menchaca Resendiz, Aswathy Velutharambath, Amelie Wuhrl, Sabine Weber, Roman Klinger. 11 Oct 2024.

Detecting Training Data of Large Language Models via Expectation Maximization
Gyuwan Kim, Yang Li, Evangelia Spiliopoulou, Jie Ma, Miguel Ballesteros, William Yang Wang. 10 Oct 2024.

Human Interest or Conflict? Leveraging LLMs for Automated Framing Analysis in TV Shows
David Alonso del Barrio, Max Tiel, D. Gática-Pérez. 19 Sep 2024.

Challenging Fairness: A Comprehensive Exploration of Bias in LLM-Based Recommendations
Shahnewaz Karim Sakib, Anindya Bijoy Das. 17 Sep 2024.
Doppelgänger's Watch: A Split Objective Approach to Large Language Models
S. Ghasemlou, Ashish Katiyar, Aparajita Saraf, Seungwhan Moon, Mangesh Pujari, Pinar E. Donmez, Babak Damavandi, Anuj Kumar. 09 Sep 2024.

On the Relationship between Truth and Political Bias in Language Models
S. Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, Jad Kabbara. 09 Sep 2024.

Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil
Marcelo Sartori Locatelli, Matheus Prado Miranda, Igor Joaquim da Silva Costa, Matheus Torres Prates, Victor Thomé, ..., Tomas Lacerda, Adriana Pagano, Eduardo Rios Neto, Wagner Meira Jr., Virgílio A. F. Almeida. 09 Aug 2024.
How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies
Alina Leidinger, Richard Rogers. 16 Jul 2024.

Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024
Ilias Chalkidis. 11 Jul 2024.

Revealing Fine-Grained Values and Opinions in Large Language Models
Dustin Wright, Arnav Arora, Nadav Borenstein, Srishti Yadav, Serge J. Belongie, Isabelle Augenstein. 27 Jun 2024.

FernUni LLM Experimental Infrastructure (FLEXI) -- Enabling Experimentation and Innovation in Higher Education Through Access to Open Large Language Models
Torsten Zesch, Michael Hanses, Niels Seidel, Piush Aggarwal, Dirk Veiel, Claudia de Witt. 27 Jun 2024.

The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale
Guilherme Penedo, Hynek Kydlícek, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro von Werra, Thomas Wolf. 25 Jun 2024.