
arXiv:2206.09384
Sampling from Log-Concave Distributions over Polytopes via a Soft-Threshold Dikin Walk

19 June 2022
Oren Mangoubi
Nisheeth K. Vishnoi
Abstract

Given a Lipschitz or smooth convex function $f: K \to \mathbb{R}$ for a bounded polytope $K \subseteq \mathbb{R}^d$ defined by $m$ inequalities, we consider the problem of sampling from the log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to $K$. Interest in this problem derives from its applications to Bayesian inference and differentially private learning. Our main result is a generalization of the Dikin walk Markov chain to this setting that requires at most $O((md + dL^2R^2) \times md^{\omega-1} \log(\frac{w}{\delta}))$ arithmetic operations to sample from $\pi$ within error $\delta > 0$ in the total variation distance from a $w$-warm start. Here $L$ is the Lipschitz constant of $f$, $K$ is contained in a ball of radius $R$ and contains a ball of smaller radius $r$, and $\omega$ is the matrix multiplication constant. Our algorithm improves on the running time of prior works for a range of parameter settings important for the aforementioned learning applications. Technically, we depart from previous Dikin walks by adding a "soft-threshold" regularizer, derived from the Lipschitz or smoothness properties of $f$, to the log-barrier function for $K$. This regularizer allows our version of the Dikin walk to propose updates that have a high Metropolis acceptance ratio for $f$ while at the same time remaining inside the polytope $K$.
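To make the high-level description concrete, here is a minimal sketch of one step of a Dikin-style walk on $K = \{x : Ax \le b\}$ targeting $\pi \propto e^{-f}$. The local metric is the log-barrier Hessian plus a regularizer; the specific choice $L^2 I$ below is an illustrative stand-in for the paper's soft-threshold term (the exact regularizer and step size in the paper differ), and the function and parameter names are assumptions for this sketch, not the authors' code.

```python
import numpy as np

def dikin_walk_step(theta, A, b, f, L, step=0.05, rng=None):
    """One Metropolis-filtered step of a regularized Dikin walk.

    Targets pi(theta) ∝ exp(-f(theta)) on K = {x : A x <= b}.
    The proposal covariance is (step^2/d) * H(theta)^{-1}, where H is the
    log-barrier Hessian of K plus an L^2 * I regularizer (an illustrative
    stand-in for the paper's soft-threshold term).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.shape[0]

    def hessian(x):
        s = b - A @ x                      # slacks; positive strictly inside K
        H = (A / s[:, None] ** 2).T @ A    # sum_i a_i a_i^T / s_i^2
        return H + (L ** 2) * np.eye(d)   # regularizer keeps H positive definite

    H = hessian(theta)
    # Draw z ~ N(theta, (step^2/d) H^{-1}) using the Cholesky factor of H.
    C = np.linalg.cholesky(H)
    xi = rng.standard_normal(d)
    z = theta + (step / np.sqrt(d)) * np.linalg.solve(C.T, xi)

    if np.any(A @ z >= b):                 # proposal left the polytope: reject
        return theta

    Hz = hessian(z)
    Cz = np.linalg.cholesky(Hz)
    # log det(Hz) - log det(H), from the Cholesky diagonals.
    logdet = 2.0 * (np.log(np.diag(Cz)).sum() - np.log(np.diag(C)).sum())
    # Gaussian proposal log-densities (up to a shared constant).
    q_fwd = -0.5 * d / step ** 2 * (z - theta) @ H @ (z - theta)
    q_bwd = -0.5 * d / step ** 2 * (theta - z) @ Hz @ (theta - z)
    log_acc = -f(z) + f(theta) + 0.5 * logdet + q_bwd - q_fwd
    if np.log(rng.uniform()) < min(0.0, log_acc):
        return z
    return theta
```

The Metropolis filter compares both the target $e^{-f}$ and the two state-dependent Gaussian proposal densities, which is what keeps the chain reversible even though the metric changes from point to point.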
