Towards Video to Piano Music Generation with Chain-of-Perform Support Benchmarks

26 May 2025
Chang Liu, Haomin Zhang, Shiyu Xia, Zihao Chen, Chaofan Ding, Xin Yue, Huizhe Chen, Xinhan Di
arXiv (abs) · PDF · HTML
Main: 3 pages · 1 figure · Bibliography: 1 page · 1 table
Abstract

Generating high-quality piano audio from video requires precise synchronization between visual cues and musical output, ensuring accurate semantic and temporal alignment. However, existing evaluation datasets do not fully capture the intricate synchronization required for piano music generation. A comprehensive benchmark is essential for two primary reasons: (1) existing metrics fail to reflect the complexity of video-to-piano music interactions, and (2) a dedicated benchmark dataset can provide valuable insights to accelerate progress in high-quality piano music generation. To address these challenges, we introduce the CoP Benchmark Dataset, a fully open-sourced, multimodal benchmark designed specifically for video-guided piano music generation. The proposed Chain-of-Perform (CoP) benchmark offers several compelling features: (1) detailed multimodal annotations, enabling precise semantic and temporal alignment between video content and piano audio via step-by-step Chain-of-Perform guidance; (2) a versatile evaluation framework for rigorous assessment of both general-purpose and specialized video-to-piano generation tasks; and (3) full open-sourcing of the dataset, annotations, and evaluation protocols. The dataset is publicly available at this https URL, with a continuously updated leaderboard to promote ongoing research in this domain.
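
The abstract does not spell out the benchmark's evaluation protocol, so the snippet below is only an illustrative sketch of what a temporal-alignment check between video cues and generated piano audio could look like: it matches annotated video key-press times against detected audio note onsets within a tolerance window and reports precision, recall, and F1. The function name temporal_alignment_f1, the use of librosa onset detection, and the 100 ms tolerance are assumptions for illustration, not the CoP benchmark's actual metric.

import numpy as np
import librosa

def temporal_alignment_f1(video_event_times, audio_path, tolerance=0.1):
    """Hypothetical alignment score: match annotated video events (e.g., key
    presses, in seconds) to detected audio onsets within +/- tolerance."""
    # Detect note onsets in the generated piano audio.
    y, sr = librosa.load(audio_path, sr=None)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    matched = 0
    used = set()
    for t in video_event_times:
        # Greedily pair each video event with the nearest unused onset
        # that falls inside the tolerance window.
        candidates = [(abs(o - t), i) for i, o in enumerate(onsets)
                      if i not in used and abs(o - t) <= tolerance]
        if candidates:
            _, idx = min(candidates)
            used.add(idx)
            matched += 1

    precision = matched / max(len(onsets), 1)
    recall = matched / max(len(video_event_times), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1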

@article{liu2025_2505.20038,
  title={Towards Video to Piano Music Generation with Chain-of-Perform Support Benchmarks},
  author={Chang Liu and Haomin Zhang and Shiyu Xia and Zihao Chen and Chaofan Ding and Xin Yue and Huizhe Chen and Xinhan Di},
  journal={arXiv preprint arXiv:2505.20038},
  year={2025}
}