Seldonian Reinforcement Learning for Ad Hoc Teamwork

5 March 2025
Edoardo Zorzi
Alberto Castellini
Leonidas Bakopoulos
Georgios Chalkiadakis
Alessandro Farinelli
Abstract

Most offline RL algorithms return optimal policies but do not provide statistical guarantees on undesirable behaviors. This can cause reliability issues in safety-critical applications, such as multiagent domains where agents, and possibly humans, must interact to reach their goals without harming each other. In this work, we propose a novel offline RL approach, inspired by Seldonian optimization, which returns policies with good performance and statistically guaranteed properties with respect to predefined undesirable behaviors. In particular, we focus on Ad Hoc Teamwork settings, where agents must collaborate with new teammates without prior coordination. Our method requires only a pre-collected dataset, a set of candidate policies for our agent, and a specification of the possible policies followed by the other players -- it requires no further interaction, no training, and no assumptions on the type or architecture of the policies. We test our algorithm on Ad Hoc Teamwork problems and show that it consistently finds reliable policies while improving sample efficiency with respect to standard ML baselines.
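
The Seldonian recipe the abstract invokes separates policy search from a high-confidence safety check: a candidate policy is returned only if, from the offline data alone, an upper confidence bound on its expected undesirable behavior falls below a chosen limit. Below is a minimal Python sketch of that generic pattern, not the paper's actual algorithm: it assumes per-trajectory importance sampling against a known behavior policy, a Hoeffding-style bound, and hypothetical policy objects exposing a prob(state, action) method; all names and thresholds are illustrative.

```python
import numpy as np

def hoeffding_upper_bound(samples, delta, b):
    """One-sided (1 - delta) upper confidence bound on the mean of
    samples assumed to lie in [0, b] (Hoeffding's inequality)."""
    n = len(samples)
    return samples.mean() + b * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def is_cost_estimate(trajectory, pi_c, pi_b, w_max):
    """Per-trajectory importance-sampling estimate of the candidate's
    expected cost (the 'undesirable behavior' measure). The importance
    weight is clipped at w_max so the estimate stays bounded."""
    w, cost = 1.0, 0.0
    for s, a, c in trajectory:  # (state, action, cost) triples from the log
        w *= pi_c.prob(s, a) / pi_b.prob(s, a)
        cost += c
    return min(w, w_max) * cost

def select_safe_policy(candidates, dataset, pi_b, cost_limit, delta,
                       w_max=100.0, max_cost=1.0):
    """Seldonian-style selection: keep only candidates whose
    high-confidence upper bound on expected cost is below cost_limit,
    then return the safe candidate with the lowest estimated cost
    (a stand-in for 'best task performance'). max_cost is the assumed
    bound on any single trajectory's total cost."""
    safe = []
    for pi_c in candidates:
        costs = np.array([is_cost_estimate(tau, pi_c, pi_b, w_max)
                          for tau in dataset])
        ub = hoeffding_upper_bound(costs, delta, w_max * max_cost)
        if ub <= cost_limit:
            safe.append((costs.mean(), pi_c))
    if not safe:
        return None  # the Seldonian "No Solution Found" outcome
    return min(safe, key=lambda t: t[0])[1]
```

Clipping the importance weight at w_max keeps the per-trajectory estimates bounded, which Hoeffding's inequality requires, trading a little bias for a valid confidence bound. When no candidate passes the test, the function returns None, mirroring the Seldonian framework's "No Solution Found" outcome rather than silently returning an unverified policy.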

@article{zorzi2025_2503.03885,
  title={Seldonian Reinforcement Learning for Ad Hoc Teamwork},
  author={Edoardo Zorzi and Alberto Castellini and Leonidas Bakopoulos and Georgios Chalkiadakis and Alessandro Farinelli},
  journal={arXiv preprint arXiv:2503.03885},
  year={2025}
}