Navigating the Social Welfare Frontier: Portfolios for Multi-objective Reinforcement Learning

17 February 2025
Cheol Woo Kim, Jai Moondra, Shresth Verma, Madeleine Pollack, Lingkai Kong, Milind Tambe, Swati Gupta
Main: 8 pages · Appendix: 11 pages · Bibliography: 4 pages · 4 figures · 5 tables
Abstract

In many real-world applications of reinforcement learning (RL), deployed policies have varied impacts on different stakeholders, creating challenges in reaching consensus on how to effectively aggregate their preferences. Generalized p-means form a widely used class of social welfare functions for this purpose, with broad applications in fair resource allocation, AI alignment, and decision-making. This class includes well-known welfare functions such as Egalitarian, Nash, and Utilitarian welfare. However, selecting the appropriate social welfare function is challenging for decision-makers, as the structure and outcomes of optimal policies can be highly sensitive to the choice of p. To address this challenge, we study the concept of an α-approximate portfolio in RL, a set of policies that are approximately optimal across the family of generalized p-means for all p ∈ [−∞, 1]. We propose algorithms to compute such portfolios and provide theoretical guarantees on the trade-offs among approximation factor, portfolio size, and computational efficiency. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of our approach in summarizing the policy space induced by varying p values, empowering decision-makers to navigate this landscape more effectively.
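
The p-mean family and the portfolio condition the abstract describes can be made concrete with a short sketch. Below is a minimal Python illustration, assuming each policy is summarized by a vector of positive per-stakeholder utilities and taking "α-approximate" to mean attaining at least an α fraction of the best achievable welfare at every p. The function names, the finite grid over p, and the brute-force maximization are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def p_mean(utils, p):
    """Generalized p-mean of a positive utility vector.

    p = 1     -> Utilitarian welfare (arithmetic mean)
    p -> 0    -> Nash welfare (geometric mean)
    p = -inf  -> Egalitarian welfare (minimum utility)
    """
    u = np.asarray(utils, dtype=float)
    if p == -np.inf:
        return float(u.min())
    if p == 0:
        # Geometric mean, computed in log space for numerical stability.
        return float(np.exp(np.mean(np.log(u))))
    return float(np.mean(u ** p) ** (1.0 / p))

def is_alpha_approximate(portfolio, candidates, p_grid, alpha):
    """Grid-based check (an assumption for illustration): the portfolio is
    alpha-approximate if, for every p in the grid, its best policy attains
    at least alpha times the best welfare over all candidate policies."""
    for p in p_grid:
        best_in_portfolio = max(p_mean(u, p) for u in portfolio)
        best_overall = max(p_mean(u, p) for u in candidates)
        if best_in_portfolio < alpha * best_overall:
            return False
    return True

# Hypothetical example: three policies, each giving utilities to two stakeholders.
policies = [[1.0, 9.0], [4.0, 5.0], [3.0, 3.0]]
grid = [-np.inf, -2.0, -1.0, 0.0, 0.5, 1.0]
print(is_alpha_approximate(policies[:2], policies, grid, alpha=0.9))
```

In this toy example the first policy is best under Utilitarian welfare (p = 1) while the second is best under Egalitarian welfare (p = −∞), so a small portfolio containing both covers the whole range of p reasonably well, which is the kind of trade-off between portfolio size and approximation factor the paper studies.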
