arXiv:2505.20627 — Cited By
Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching
27 May 2025
Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su, Jiancong Xiao
Papers citing "Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching" (4 of 4 papers shown)
1. Theoretical Tensions in RLHF: Reconciling Empirical Success with Inconsistencies in Social Choice Theory
   Jiancong Xiao, Zhekun Shi, Kaizhao Liu, Q. Long, Weijie J. Su — 14 Jun 2025 (29 · 0 · 0)

2. Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach
   Jiancong Xiao, Bojian Hou, Zhanliang Wang, Ruochen Jin, Q. Long, Weijie Su, Li Shen — 04 May 2025 (104 · 2 · 0)

3. Statistical Impossibility and Possibility of Aligning LLMs with Human Preferences: From Condorcet Paradox to Nash Equilibrium
   Kaizhao Liu, Qi Long, Zhekun Shi, Weijie J. Su, Jiancong Xiao — 14 Mar 2025 (86 · 7 · 0)

4. Jackpot! Alignment as a Maximal Lottery
   Roberto-Rafael Maura-Rivero, Marc Lanctot, Francesco Visin, Kate Larson — 31 Jan 2025 (123 · 7 · 0)