
STEER-BENCH: A Benchmark for Evaluating the Steerability of Large Language Models

Main: 10 pages, 14 figures, 15 tables; Bibliography: 2 pages; Appendix: 18 pages
Abstract

Steerability, the ability of large language models (LLMs) to adapt outputs to align with diverse community-specific norms, perspectives, and communication styles, is critical for real-world applications but remains under-evaluated. We introduce Steer-Bench, a benchmark for assessing population-specific steering using contrasting Reddit communities. Covering 30 contrasting subreddit pairs across 19 domains, Steer-Bench includes over 10,000 instruction-response pairs and 5,500 validated multiple-choice questions with corresponding silver labels to test alignment with diverse community norms. Our evaluation of 13 popular LLMs reveals that while human experts achieve 81% accuracy against the silver labels, the best-performing models reach only around 65% accuracy, depending on the domain and configuration. Some models lag human-level alignment by over 15 percentage points, highlighting significant gaps in community-sensitive steerability. Steer-Bench thus provides a systematic way to assess how effectively LLMs follow community-specific instructions, how resilient they are to adversarial steering attempts, and how accurately they represent diverse cultural and ideological perspectives.
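The core evaluation protocol described above (steer a model toward a target community, then score its multiple-choice answers against silver labels) can be illustrated with a minimal sketch. The record schema, prompt wording, and helper names below are hypothetical illustrations for clarity, not the paper's released data format or evaluation code:

# Minimal sketch of silver-label accuracy scoring for community-steered MCQs.
# The MCQItem schema and steering prompt are assumptions, not Steer-Bench's format.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MCQItem:
    community: str        # target community, e.g. a subreddit name
    question: str         # community-specific multiple-choice question
    options: List[str]    # candidate answer texts
    silver_label: int     # index of the community-aligned answer

def steering_prompt(item: MCQItem) -> str:
    """Build a prompt that steers the model toward the target community."""
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(item.options))
    return (
        f"Answer as a typical member of the online community '{item.community}'.\n"
        f"{item.question}\n{opts}\nReply with a single letter."
    )

def accuracy(items: List[MCQItem], answer_fn: Callable[[str], str]) -> float:
    """Fraction of items where the model picks the silver-labeled option."""
    correct = 0
    for item in items:
        reply = answer_fn(steering_prompt(item)).strip().upper()
        if reply[:1] == chr(65 + item.silver_label):
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    demo = [MCQItem("r/example", "Which greeting fits this community?",
                    ["Hello, fellow humans", "hey y'all", "Greetings, sir"], 1)]
    # Stub model that always answers "B"; swap in a real LLM call here.
    print(accuracy(demo, lambda prompt: "B"))

In this framing, comparing the model's accuracy against the 81% human-expert figure quantifies the steerability gap per community pair.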

@article{chen2025_2505.20645,
  title={STEER-BENCH: A Benchmark for Evaluating the Steerability of Large Language Models},
  author={Kai Chen and Zihao He and Taiwei Shi and Kristina Lerman},
  journal={arXiv preprint arXiv:2505.20645},
  year={2025}
}