
Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models

Teysir Baoueb
Xiaoyu Bie
Xi Wang
Gaël Richard
5 pages (main text) + 2 pages (bibliography), 3 figures, 3 tables
Abstract

Breakthroughs in text-to-music generation models are transforming the creative landscape, equipping musicians with innovative tools for composition and experimentation. However, controlling the generation process to achieve a specific desired outcome remains a significant challenge: even a minor change in the text prompt, combined with the same random seed, can drastically alter the generated piece. In this paper, we explore the application of existing text-to-music diffusion models to instrument editing. Specifically, given an existing audio track, we aim to leverage a pretrained text-to-music diffusion model to change the instrument while preserving the underlying content. Based on the insight that the model first lays down the overall structure or content of the audio, then adds instrument information, and finally refines the quality, we show that selecting a well-chosen intermediate timestep, identified through an instrument classifier, strikes a balance between preserving the original piece's content and achieving the desired timbre. Our method requires no additional training of the text-to-music diffusion model and does not slow down generation.

@article{baoueb2025_2506.15530,
  title={Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models},
  author={Teysir Baoueb and Xiaoyu Bie and Xi Wang and Gaël Richard},
  journal={arXiv preprint arXiv:2506.15530},
  year={2025}
}