Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching


27 May 2025
Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su, Jiancong Xiao
arXiv:2505.20627 (abs / PDF / HTML)

Papers citing "Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching"

4 of 4 citing papers shown.

1. Theoretical Tensions in RLHF: Reconciling Empirical Success with Inconsistencies in Social Choice Theory
   Jiancong Xiao, Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su
   14 Jun 2025

2. Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach
   Jiancong Xiao, Bojian Hou, Zhanliang Wang, Ruochen Jin, Qi Long, Weijie J. Su, Li Shen
   04 May 2025

3. Statistical Impossibility and Possibility of Aligning LLMs with Human Preferences: From Condorcet Paradox to Nash Equilibrium
   Kaizhao Liu, Qi Long, Zhekun Shi, Weijie J. Su, Jiancong Xiao
   14 Mar 2025

4. Jackpot! Alignment as a Maximal Lottery
   Roberto-Rafael Maura-Rivero, Marc Lanctot, Francesco Visin, Kate Larson
   31 Jan 2025