arXiv:2204.10793
Optimal Scaling for the Proximal Langevin Algorithm in High Dimensions

21 April 2022
Natesh S. Pillai
Abstract

The Metropolis-adjusted Langevin algorithm (MALA) is a sampling algorithm that incorporates the gradient of the logarithm of the target density in its proposal distribution. In earlier joint work \citet{pill:stu:12}, the author extended the seminal work of \cite{Robe:Rose:98} and showed that, in stationarity, MALA applied to an $N$-dimensional approximation of the target takes ${\cal O}(N^{1/3})$ steps to explore its target measure. It was also shown that the MALA algorithm is optimized at an average acceptance probability of $0.574$. In \citet{pere:16}, the author introduced the proximal MALA algorithm, in which the gradient of the log target density is replaced by the proximal function. In this paper, we show that for a wide class of twice differentiable target densities, proximal MALA enjoys the same optimal scaling as MALA in high dimensions and likewise attains an optimal average acceptance probability of $0.574$. The results of this paper thus give the following practically useful guideline: for smooth target densities where the gradient is expensive to compute when implementing MALA, users may replace the gradient with the corresponding proximal function (which can often be computed relatively cheaply via convex optimization) \emph{without} losing the efficiency gains from optimal scaling. This confirms some of the empirical observations made in \cite{pere:16}.
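As a concrete illustration (not taken from the paper), the sketch below implements a proximal MALA chain for a standard Gaussian target, where the proximal map has a closed form. The drift uses the gradient of the Moreau envelope, $(x - \mathrm{prox}_{\lambda U}(x))/\lambda$, in place of $\nabla U(x)$; the step size $h \propto N^{-1/3}$ mirrors the optimal-scaling result, while the constant $\ell = 1$ and the smoothing choice $\lambda = h$ are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def prox(x, lam):
    # Closed-form proximal map for U(x) = ||x||^2 / 2 (standard Gaussian target):
    # argmin_y { U(y) + ||y - x||^2 / (2*lam) } = x / (1 + lam)
    return x / (1.0 + lam)

def log_pi(x):
    # Log-density of the target, up to an additive constant.
    return -0.5 * np.dot(x, x)

def pmala(n_steps, dim, h, lam, rng):
    """Run proximal MALA and return the empirical acceptance rate."""
    x = rng.standard_normal(dim)
    accepts = 0

    def drift(z):
        # Moreau-envelope gradient replaces grad log pi:
        # -grad U^lam(z) = -(z - prox(z, lam)) / lam
        return -(z - prox(z, lam)) / lam

    for _ in range(n_steps):
        mean_x = x + 0.5 * h * drift(x)
        y = mean_x + np.sqrt(h) * rng.standard_normal(dim)
        mean_y = y + 0.5 * h * drift(y)
        # Gaussian proposal log-densities for the Metropolis-Hastings ratio.
        log_q_xy = -np.sum((y - mean_x) ** 2) / (2 * h)
        log_q_yx = -np.sum((x - mean_y) ** 2) / (2 * h)
        log_alpha = log_pi(y) + log_q_yx - log_pi(x) - log_q_xy
        if np.log(rng.uniform()) < log_alpha:
            x, accepts = y, accepts + 1
    return accepts / n_steps

rng = np.random.default_rng(0)
dim = 50
h = dim ** (-1 / 3)        # step size ~ N^{-1/3}, per the optimal-scaling result
rate = pmala(20000, dim, h, lam=h, rng=rng)
```

Because the proposal is corrected by a Metropolis-Hastings accept/reject step, the chain targets the exact density regardless of the smoothing parameter; tuning $h$ so that the empirical acceptance rate sits near $0.574$ is the practical recipe suggested by the scaling result.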
