Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans?

Abstract

Framing in media critically shapes public perception by selectively emphasizing some details while downplaying others. With the rise of large language models in automated news and content creation, there is growing concern that these systems may introduce or even amplify framing biases compared to human authors. In this paper, we explore how framing manifests in both out-of-the-box and fine-tuned LLM-generated news content. Our analysis reveals that, particularly in politically and socially sensitive contexts, LLMs tend to exhibit more pronounced framing than their human counterparts. In addition, we observe significant variation in framing tendencies across different model architectures, with some models displaying notably higher biases. These findings point to the need for effective post-training mitigation strategies and tighter evaluation frameworks to ensure that automated news content upholds the standards of balanced reporting.

@article{pastorino2025_2505.05406,
  title={Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans?},
  author={Valeria Pastorino and Nafise Sadat Moosavi},
  journal={arXiv preprint arXiv:2505.05406},
  year={2025}
}