

MTM Dataset for Joint Representation Learning among Sheet Music, Lyrics, and Musical Audio

1 December 2020
Donghuo Zeng
Yi Yu
K. Oyama
arXiv: 2012.00290 (abs · PDF · HTML) · GitHub: https://github.com/MorningBooks/MTM-Dataset (9★)
Abstract

We introduce the Music Ternary Modalities Dataset (MTM Dataset), created by our group to learn joint representations across three music modalities in music information retrieval (MIR), covering three types of cross-modal retrieval. Learning joint representations for cross-modal retrieval among three modalities has been held back by the limited availability of large datasets that include three or more modalities. The MTM Dataset is collected to overcome this constraint by extending music notes to sheet music and music audio and by building fine-grained alignment between music notes and lyric syllables, so that the dataset can be used to learn joint representations across multimodal music data. The MTM Dataset provides three modalities, sheet music, lyrics, and music audio, together with their features extracted by pre-trained models. In this paper, we describe the dataset and how it was built, and evaluate several baselines for cross-modal retrieval tasks. The dataset and usage examples are available at https://github.com/MorningBooks/MTM-Dataset.
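
As a rough illustration of the cross-modal retrieval setting the dataset targets, the sketch below ranks items of one modality (e.g. audio) against queries from another (e.g. lyrics) by cosine similarity over pre-extracted embeddings in a shared space. The array shapes, feature dimensions, and toy data here are assumptions for illustration only; consult the MTM-Dataset repository for the actual data layout and features.

```python
# Minimal sketch of cross-modal retrieval over pre-extracted, jointly embedded
# features. Shapes and the toy embeddings are hypothetical, not the MTM format.
import numpy as np


def l2_normalize(x: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Row-wise L2 normalization so dot products become cosine similarities."""
    return x / np.maximum(np.linalg.norm(x, axis=1, keepdims=True), eps)


def retrieve(query_feats: np.ndarray, gallery_feats: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return the indices of the top-k gallery items for each query (e.g. lyrics -> audio)."""
    q = l2_normalize(query_feats)
    g = l2_normalize(gallery_feats)
    sims = q @ g.T                          # (num_queries, num_gallery) cosine similarities
    return np.argsort(-sims, axis=1)[:, :top_k]


if __name__ == "__main__":
    # Hypothetical aligned lyrics/audio embeddings: the audio side is a noisy
    # copy of the lyrics side, standing in for a learned joint space.
    rng = np.random.default_rng(0)
    lyrics_emb = rng.standard_normal((100, 128))
    audio_emb = lyrics_emb + 0.1 * rng.standard_normal((100, 128))
    ranks = retrieve(lyrics_emb, audio_emb)
    recall_at_1 = np.mean(ranks[:, 0] == np.arange(100))
    print(f"Recall@1 on toy data: {recall_at_1:.2f}")
```

With paired indices as ground truth, the same ranking can be scored with standard retrieval metrics (Recall@k, mean reciprocal rank), which is how baselines of this kind are typically evaluated.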
