ResearchTrend.AI

Multi-User Reinforcement Learning with Low Rank Rewards

11 October 2022
Naman Agarwal
Prateek Jain
S. Kowshik
Dheeraj M. Nagaraj
Praneeth Netrapalli
    OffRL
Abstract

In this work, we consider the problem of collaborative multi-user reinforcement learning. In this setting there are multiple users with the same state-action space and transition probabilities but with different rewards. Under the assumption that the reward matrix of the N users has a low-rank structure -- a standard and practically successful assumption in the offline collaborative filtering setting -- the question is whether we can design algorithms with significantly lower sample complexity than those that learn the MDP individually for each user. Our main contribution is an algorithm which explores rewards collaboratively across the N user-specific MDPs and can learn rewards efficiently in two key settings: tabular MDPs and linear MDPs. When N is large and the rank is constant, the sample complexity per MDP depends only logarithmically on the size of the state space, an exponential reduction (in the state-space size) compared to standard "non-collaborative" algorithms.
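To make the low-rank assumption concrete, the following is a minimal NumPy sketch (illustrative only, not the paper's algorithm). It stacks each user's reward vector over the S·A state-action pairs into an N × (S·A) matrix and builds it from rank-d factors; the sizes N, S, A, d are hypothetical choices for the example.

```python
import numpy as np

# Sketch of the low-rank reward assumption.
# N users share the same states and actions, but user n has its own
# reward vector r_n over the S*A state-action pairs. Stacking the
# vectors row-wise gives an N x (S*A) matrix R; the assumption is
# R = U @ V.T for a small inner rank d.

rng = np.random.default_rng(0)
N, S, A, d = 100, 20, 5, 3           # users, states, actions, rank (hypothetical)

U = rng.standard_normal((N, d))      # per-user factors
V = rng.standard_normal((S * A, d))  # shared state-action factors
R = U @ V.T                          # reward matrix: one row per user

# Although R has N * S * A entries, its rank is only d, so roughly
# (N + S*A) * d numbers determine it -- the structural source of the
# sample-complexity savings over learning each user's rewards separately.
print(R.shape, np.linalg.matrix_rank(R))
```

Intuitively, once the shared factors V are (approximately) recovered from pooled exploration, each additional user only needs enough samples to pin down a d-dimensional row of U, rather than a full S·A-dimensional reward vector.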
