Learning to Upsample and Upmix Audio in the Latent Domain

31 May 2025
Dimitrios Bralios
Paris Smaragdis
Jonah Casebeer
Main: 4 pages · 3 figures · 3 tables · Bibliography: 1 page
Abstract

Neural audio autoencoders create compact latent representations that preserve perceptually important information, serving as the foundation for both modern audio compression systems and generation approaches like next-token prediction and latent diffusion. Despite their prevalence, most audio processing operations, such as spatial and spectral up-sampling, still inefficiently operate on raw waveforms or spectral representations rather than directly on these compressed representations. We propose a framework that performs audio processing operations entirely within an autoencoder's latent space, eliminating the need to decode to raw audio formats. Our approach dramatically simplifies training by operating solely in the latent domain, with a latent L1 reconstruction term, augmented by a single latent adversarial discriminator. This contrasts sharply with raw-audio methods that typically require complex combinations of multi-scale losses and discriminators. Through experiments in bandwidth extension and mono-to-stereo up-mixing, we demonstrate computational efficiency gains of up to 100x while maintaining quality comparable to post-processing on raw audio. This work establishes a more efficient paradigm for audio processing pipelines that already incorporate autoencoders, enabling significantly faster and more resource-efficient workflows across various audio tasks.
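The abstract describes a training recipe that stays entirely in the latent domain: a processor network maps input latents (e.g. band-limited or mono audio encoded by a frozen autoencoder) to target latents (full-band or stereo), supervised by a latent L1 reconstruction term plus a single latent adversarial discriminator. The following PyTorch sketch illustrates that setup under stated assumptions; the module sizes, layer choices, and hinge-style GAN loss are illustrative guesses, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64  # assumed channel width of the frozen autoencoder's latents

class LatentProcessor(nn.Module):
    """Maps input latents to predicted target latents (e.g. mono -> stereo)."""
    def __init__(self, dim=LATENT_DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(hidden, dim, kernel_size=3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

class LatentDiscriminator(nn.Module):
    """Single discriminator operating directly on latent sequences."""
    def __init__(self, dim=LATENT_DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, hidden, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, z):
        return self.net(z)  # per-frame real/fake logits

def training_step(processor, disc, opt_g, opt_d, z_in, z_target, adv_weight=1.0):
    """One optimization step: latent L1 plus hinge adversarial losses."""
    # --- discriminator update ---
    with torch.no_grad():
        z_fake = processor(z_in)
    d_real = disc(z_target)
    d_fake = disc(z_fake)
    d_loss = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator (processor) update ---
    z_fake = processor(z_in)
    g_adv = -disc(z_fake).mean()
    g_l1 = F.l1_loss(z_fake, z_target)
    g_loss = g_l1 + adv_weight * g_adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return {"d_loss": d_loss.item(), "l1": g_l1.item(), "g_adv": g_adv.item()}

Because both the supervision signal and the adversarial game live in the compact latent space, the training loop avoids decoding to waveforms and the multi-scale spectral losses and discriminator ensembles typical of raw-audio GAN training, which is the source of the efficiency gains the abstract reports.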

@article{bralios2025_2506.00681,
  title={Learning to Upsample and Upmix Audio in the Latent Domain},
  author={Dimitrios Bralios and Paris Smaragdis and Jonah Casebeer},
  journal={arXiv preprint arXiv:2506.00681},
  year={2025}
}