Mean and Variance Estimation Complexity in Arbitrary Distributions via Wasserstein Minimization

20 January 2025
Valentio Iverson
Stephen Vavasis
arXiv:2501.10172
Abstract

Parameter estimation is a fundamental challenge in machine learning, crucial for tasks such as neural network weight fitting and Bayesian inference. This paper focuses on the complexity of estimating a translation parameter $\boldsymbol{\mu} \in \mathbb{R}^l$ and a shrinkage parameter $\sigma \in \mathbb{R}_{++}$ for a distribution of the form $\frac{1}{\sigma^l} f_0\!\left( \frac{\boldsymbol{x} - \boldsymbol{\mu}}{\sigma} \right)$, where $f_0$ is a known density in $\mathbb{R}^l$, given $n$ samples. We highlight that while the problem is NP-hard for Maximum Likelihood Estimation (MLE), $\varepsilon$-approximations for arbitrary $\varepsilon > 0$ can be obtained in $\mathrm{poly}\!\left( \frac{1}{\varepsilon} \right)$ time by minimizing the Wasserstein distance.
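As a rough numerical sketch of the estimation objective (not the paper's polynomial-time algorithm), the one-dimensional case $l = 1$ can be illustrated by minimizing the empirical 1-Wasserstein distance between the observed data and samples drawn from the translated, scaled base density. Here $f_0$ is assumed to be the standard normal, and the sample sizes and Nelder-Mead optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

# Toy 1D illustration (l = 1); the base density f0 is assumed standard normal.
rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 0.5
data = mu_true + sigma_true * rng.standard_normal(2000)

# Monte Carlo samples from the known base density f0.
ref = rng.standard_normal(2000)

def objective(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # reparameterize so that sigma stays positive
    # Empirical 1-Wasserstein distance between the data and samples
    # from the translated/scaled model  mu + sigma * X,  X ~ f0.
    return wasserstein_distance(data, mu + sigma * ref)

res = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```

Minimizing over $\log \sigma$ rather than $\sigma$ keeps the shrinkage parameter in $\mathbb{R}_{++}$ without explicit constraints; note that `scipy.stats.wasserstein_distance` handles only the one-dimensional case, whereas the paper's setting is general $\mathbb{R}^l$.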
