Large Language Models are Biased Because They Are Large Language Models

Philip Resnik
Abstract

This position paper's primary goal is to provoke thoughtful discussion about the relationship between bias and fundamental properties of large language models. I do this by seeking to convince the reader that harmful biases are an inevitable consequence of the design of any large language model as LLMs are currently formulated. To the extent that this is true, it suggests that the problem of harmful bias cannot be properly addressed without a serious reconsideration of LLM-driven AI, going back to the foundational assumptions underlying their design.

@article{resnik2025_2406.13138,
  title={Large Language Models are Biased Because They Are Large Language Models},
  author={Philip Resnik},
  journal={arXiv preprint arXiv:2406.13138},
  year={2025}
}