ReaLJam: Real-Time Human-AI Music Jamming with Reinforcement Learning-Tuned Transformers

28 February 2025
Alexander Scarlatos
Yusong Wu
Ian Simon
Adam Roberts
Tim Cooijmans
Natasha Jaques
Cassie Tarakajian
Cheng-Zhi Anna Huang
Abstract

Recent advances in generative artificial intelligence (AI) have created models capable of high-quality musical content generation. However, little consideration has been given to how these models can be used in real-time, cooperative musical jamming applications, which impose crucial requirements: low latency, the ability to communicate planned actions, and the ability to adapt to user input in real time. To support these needs, we introduce ReaLJam, an interface and protocol for live musical jamming sessions between a human and a Transformer-based AI agent trained with reinforcement learning. We enable real-time interactions using the concept of anticipation, where the agent continually predicts how the performance will unfold and visually conveys its plan to the user. We conduct a user study where experienced musicians jam in real time with the agent through ReaLJam. Our results demonstrate that ReaLJam enables enjoyable and musically interesting sessions, and we uncover important takeaways for future work.
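To make the anticipation idea concrete, below is a minimal, self-contained Python sketch of a real-time jamming loop. It is not the authors' implementation: the function predict_next_chord is a hypothetical stand-in for the RL-tuned Transformer policy, and the tempo, lookahead, and chord vocabulary are assumptions for illustration only. The point it shows is the protocol shape: the agent keeps a rolling plan committed several beats ahead of the playback head (so output is ready despite model latency), displays that plan to the user, and refills it as new user input arrives.

```python
import time
import random
from collections import deque

BEAT_SEC = 0.5        # assumed tempo: 120 BPM
LOOKAHEAD_BEATS = 4   # assumed planning horizon (beats ahead of playback)

def predict_next_chord(user_notes, prev_chord):
    """Hypothetical stand-in for the Transformer policy: choose a chord
    given recent user input and the previously committed chord."""
    candidates = ["C", "F", "G", "Am"]
    if user_notes and user_notes[-1] % 12 in (9, 4):  # crude heuristic only
        return "Am"
    return random.choice([c for c in candidates if c != prev_chord])

def jam(num_beats=16):
    plan = deque()      # chords already committed for future beats
    prev_chord = "C"
    user_notes = []
    for beat in range(num_beats):
        # 1. Simulated user input arriving in real time (MIDI pitch or rest).
        note = random.choice([60, 64, 67, None])
        if note is not None:
            user_notes.append(note)
        # 2. Replan: keep the plan filled LOOKAHEAD_BEATS ahead of playback,
        #    so the chord sounding now was committed beats ago.
        while len(plan) < LOOKAHEAD_BEATS:
            prev_chord = predict_next_chord(user_notes, prev_chord)
            plan.append(prev_chord)
        # 3. Play the previously committed chord and surface the remaining
        #    plan to the user (the visual "anticipation" display).
        current = plan.popleft()
        print(f"beat {beat:2d} | user {note} | agent {current} | plan {list(plan)}")
        time.sleep(BEAT_SEC)  # stand-in for the audio clock

if __name__ == "__main__":
    jam()
```

In a real system the sleep-based loop would be replaced by an audio/MIDI clock and the heuristic by model inference, but the commit-ahead-and-replan structure is what lets a high-latency model participate in a low-latency jam.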

View on arXiv
@article{scarlatos2025_2502.21267,
  title={ReaLJam: Real-Time Human-AI Music Jamming with Reinforcement Learning-Tuned Transformers},
  author={Alexander Scarlatos and Yusong Wu and Ian Simon and Adam Roberts and Tim Cooijmans and Natasha Jaques and Cassie Tarakajian and Cheng-Zhi Anna Huang},
  journal={arXiv preprint arXiv:2502.21267},
  year={2025}
}