arXiv:2501.19266
Jackpot! Alignment as a Maximal Lottery
31 January 2025
Roberto-Rafael Maura-Rivero, Marc Lanctot, Francesco Visin, Kate Larson
Papers citing "Jackpot! Alignment as a Maximal Lottery" (5 of 5 papers shown)
1. Theoretical Tensions in RLHF: Reconciling Empirical Success with Inconsistencies in Social Choice Theory
   Jiancong Xiao, Zhekun Shi, Kaizhao Liu, Q. Long, Weijie J. Su (14 Jun 2025)

2. Population-Proportional Preference Learning from Human Feedback: An Axiomatic Approach
   Kihyun Kim, Jiawei Zhang, Asuman Ozdaglar, P. Parrilo (05 Jun 2025)

3. Distortion of AI Alignment: Does Preference Optimization Optimize for Preferences?
   Paul Gölz, Nika Haghtalab, Kunhe Yang (29 May 2025)

4. Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching
   Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su, Jiancong Xiao (27 May 2025)

5. Soft Condorcet Optimization for Ranking of General Agents
   Marc Lanctot, Kate Larson, Michael Kaisers, Quentin Berthet, I. Gemp, Manfred Diaz, Roberto-Rafael Maura-Rivero, Yoram Bachrach, Anna Koop, Doina Precup (31 Oct 2024)