Think Again! The Effect of Test-Time Compute on Preferences, Opinions, and Beliefs of Large Language Models

As Large Language Models (LLMs) become deeply integrated into human life and increasingly influence decision-making, it is crucial to evaluate whether, and to what extent, they exhibit subjective preferences, opinions, and beliefs. These tendencies may stem from biases within the models, which can shape their behavior, influence the advice and recommendations they offer to users, and potentially reinforce certain viewpoints. This paper presents the Preference, Opinion, and Belief survey (POBs), a benchmark developed to assess LLMs' subjective inclinations across societal, cultural, ethical, and personal domains. We applied our benchmark to evaluate leading open- and closed-source LLMs, measuring desired properties such as reliability, neutrality, and consistency. In addition, we investigated the effect of increasing test-time compute, through reasoning and self-reflection mechanisms, on these metrics. Although these mechanisms are effective in other tasks, our results show that they offer only limited gains in our domain. Furthermore, we reveal that newer model versions are becoming less consistent and more biased toward specific viewpoints, highlighting a blind spot and a concerning trend. POBs: this https URL
@article{kour2025_2505.19621,
  title={Think Again! The Effect of Test-Time Compute on Preferences, Opinions, and Beliefs of Large Language Models},
  author={George Kour and Itay Nakash and Ateret Anaby-Tavor and Michal Shmueli-Scheuer},
  journal={arXiv preprint arXiv:2505.19621},
  year={2025}
}