ResearchTrend.AI
PyTupli: A Scalable Infrastructure for Collaborative Offline Reinforcement Learning Projects

22 May 2025
Hannah Markgraf
Michael Eichelbeck
Daria Cappey
Selin Demirtürk
Yara Schattschneider
Matthias Althoff
    OffRL
Abstract

Offline reinforcement learning (RL) has gained traction as a powerful paradigm for learning control policies from pre-collected data, eliminating the need for costly or risky online interactions. While many open-source libraries offer robust implementations of offline RL algorithms, they all rely on datasets composed of experience tuples consisting of state, action, next state, and reward. Managing, curating, and distributing such datasets requires suitable infrastructure. Although static datasets exist for established benchmark problems, no standardized or scalable solution supports developing and sharing datasets for novel or user-defined benchmarks. To address this gap, we introduce PyTupli, a Python-based tool to streamline the creation, storage, and dissemination of benchmark environments and their corresponding tuple datasets. PyTupli includes a lightweight client library with defined interfaces for uploading and retrieving benchmarks and data. It supports fine-grained filtering at both the episode and tuple level, allowing researchers to curate high-quality, task-specific datasets. A containerized server component enables production-ready deployment with authentication, access control, and automated certificate provisioning for secure use. By addressing key barriers in dataset infrastructure, PyTupli facilitates more collaborative, reproducible, and scalable offline RL research.
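The experience tuples the abstract describes — (state, action, next state, reward) records grouped into episodes, with filtering at both the episode and the tuple level — can be sketched generically. The names `ExperienceTuple`, `Episode`, and `filter_dataset` below are illustrative assumptions for exposition, not PyTupli's actual API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExperienceTuple:
    """One offline-RL transition: (state, action, next state, reward)."""
    state: List[float]
    action: List[float]
    next_state: List[float]
    reward: float
    done: bool

@dataclass
class Episode:
    """An ordered sequence of transitions from a single rollout."""
    tuples: List[ExperienceTuple]

    @property
    def total_return(self) -> float:
        # Undiscounted return, used here as an episode-level quality signal.
        return sum(t.reward for t in self.tuples)

def filter_dataset(episodes: List[Episode],
                   min_return: Optional[float] = None,
                   min_reward: Optional[float] = None) -> List[Episode]:
    """Curate a dataset with episode- and tuple-level filters."""
    # Episode-level filter: drop whole rollouts below a return threshold.
    if min_return is not None:
        episodes = [e for e in episodes if e.total_return >= min_return]
    # Tuple-level filter: drop individual transitions below a reward threshold.
    if min_reward is not None:
        episodes = [Episode([t for t in e.tuples if t.reward >= min_reward])
                    for e in episodes]
    return episodes
```

Distinguishing the two filter granularities matters in practice: episode-level filtering preserves trajectory structure for algorithms that need full rollouts, while tuple-level filtering suits methods that sample transitions independently.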

@article{markgraf2025_2505.16754,
  title={PyTupli: A Scalable Infrastructure for Collaborative Offline Reinforcement Learning Projects},
  author={Hannah Markgraf and Michael Eichelbeck and Daria Cappey and Selin Demirtürk and Yara Schattschneider and Matthias Althoff},
  journal={arXiv preprint arXiv:2505.16754},
  year={2025}
}