ResearchTrend.AI

Analyzing Political Bias in LLMs via Target-Oriented Sentiment Classification

26 May 2025
Akram Elbouanani
Evan Dufraisse
Adrian Popescu
ArXiv (abs) · PDF · HTML
Main: 8 pages · 14 figures · Bibliography: 3 pages · 12 tables · Appendix: 19 pages
Abstract

Political biases encoded by LLMs might have detrimental effects on downstream applications. Existing bias analysis methods rely on small-scale intermediate tasks (questionnaire answering or political content generation) and on the LLMs themselves for analysis, thus propagating bias. We propose a new approach leveraging the observation that LLM sentiment predictions vary with the target entity in the same sentence. We define an entropy-based inconsistency metric to encode this prediction variability. We insert 1,319 demographically and politically diverse politician names into 450 political sentences and predict target-oriented sentiment using seven models in six widely spoken languages. We observe inconsistencies in all tested combinations and aggregate them in a statistically robust analysis at different granularity levels. We observe positive and negative bias toward left and far-right politicians, and positive correlations between politicians with similar alignments. Bias intensity is higher for Western languages than for others. Larger models exhibit stronger and more consistent biases and reduce discrepancies between similar languages. We partially mitigate LLM unreliability in target-oriented sentiment classification (TSC) by replacing politician names with fictional but plausible counterparts.
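The entropy-based inconsistency idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the metric is the Shannon entropy of the sentiment-label distribution obtained when only the target name in a fixed sentence is varied; the sentence, names, and predicted labels below are hypothetical.

```python
from collections import Counter
import math

def inconsistency(labels):
    """Shannon entropy (in bits) of the sentiment labels predicted for one
    sentence as the inserted target name varies.
    0.0 means perfectly consistent predictions; the maximum, log2(#classes),
    means predictions depend entirely on which politician is named."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical predictions for one political sentence with eight
# different politician names inserted at the target slot:
preds_consistent = ["negative"] * 8
preds_mixed = ["negative", "positive", "negative", "neutral",
               "positive", "negative", "neutral", "negative"]

print(inconsistency(preds_consistent))  # 0.0 — label never changes with the name
print(inconsistency(preds_mixed))       # 1.5 — label shifts with the name
```

A consistent (unbiased, for this probe) model yields zero entropy for every sentence; aggregating these per-sentence scores across sentences, models, and languages then supports the kind of granularity-level analysis the abstract describes.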

@article{elbouanani2025_2505.19776,
  title={Analyzing Political Bias in LLMs via Target-Oriented Sentiment Classification},
  author={Akram Elbouanani and Evan Dufraisse and Adrian Popescu},
  journal={arXiv preprint arXiv:2505.19776},
  year={2025}
}