Estimating the Rate-Distortion Function by Wasserstein Gradient Descent

29 October 2023
Yibo Yang, Stephan Eckstein, Marcel Nutz, Stephan Mandt
Abstract

In the theory of lossy compression, the rate-distortion (R-D) function R(D) describes how much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion). Obtaining R(D) for a given data source establishes the fundamental performance limit for all compression algorithms. We propose a new method to estimate R(D) from the perspective of optimal transport. Unlike the classic Blahut–Arimoto algorithm, which fixes the support of the reproduction distribution in advance, our Wasserstein gradient descent algorithm learns the support of the optimal reproduction distribution by moving particles. We prove its local convergence and analyze the sample complexity of our R-D estimator based on a connection to entropic optimal transport. Experimentally, we obtain comparable or tighter bounds than state-of-the-art neural network methods on low-rate sources while requiring considerably less tuning and computational effort. We also highlight a connection to maximum-likelihood deconvolution and introduce a new class of sources that can be used as test cases with known solutions to the R-D problem.
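
The core idea is easy to sketch: represent the reproduction distribution as a set of movable particles and descend the rate-distortion Lagrangian with respect to their locations. Below is a minimal illustrative sketch, not the authors' implementation, assuming squared-error distortion d(x, y) = ||x − y||², uniform particle weights, and a toy Gaussian source; the names lam, n_particles, and step are hypothetical parameters chosen for this example.

```python
import numpy as np
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(0)

# Toy source: samples from a 2-D standard Gaussian (illustrative only).
x = rng.normal(size=(1000, 2))

lam = 2.0         # Lagrange multiplier trading off rate against distortion
n_particles = 64  # support size of the particle reproduction distribution
step = 0.05       # gradient step size
y = rng.normal(size=(n_particles, 2))  # initial particle locations

for _ in range(500):
    # Pairwise squared-error distortions d(x_i, y_j), shape (n, n_particles).
    d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    # Conditional weights w_j(x_i) proportional to exp(-lam * d_ij).
    w = softmax(-lam * d, axis=1)
    # Gradient of the rate functional w.r.t. each particle location:
    # grad_j = E_x[ w_j(x) * 2 * lam * (y_j - x) ]; moving the particles
    # along it is Wasserstein gradient descent, up to a constant scaling.
    grad = 2.0 * lam * (w.sum(0)[:, None] * y - w.T @ x) / len(x)
    y -= step * grad

# One point on the estimated R-D curve (in nats): F(lam) = E_x[-log Z(x)],
# D = expected distortion under the induced conditional, R = F(lam) - lam * D.
d = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
F = np.mean(-logsumexp(-lam * d, axis=1) + np.log(n_particles))
D = np.mean((softmax(-lam * d, axis=1) * d).sum(1))
R = F - lam * D
print(f"lambda={lam}: distortion ~ {D:.3f}, rate ~ {R:.3f} nats")
```

Sweeping lam traces out the estimated R-D curve, one (R, D) point per value. Note the contrast with Blahut–Arimoto: rather than optimizing weights over a fixed grid of reproduction points, the particles themselves relocate to form the support of the reproduction distribution.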
