Approximation algorithms for the normalizing constant of Gibbs distributions

12 June 2012
M. Huber
Abstract

Consider a family of distributions $\{\pi_\beta\}$, where $X \sim \pi_\beta$ means that $\mathbb{P}(X = x) = \exp(-\beta H(x))/Z(\beta)$. Here $Z(\beta)$ is the normalizing constant, equal to $\sum_x \exp(-\beta H(x))$. Then $\{\pi_\beta\}$ is known as a Gibbs distribution, and $Z(\beta)$ is the partition function. This work presents a new method for approximating the partition function to a specified level of relative accuracy using a number of samples that is $O(\ln(Z(\beta))\ln(\ln(Z(\beta))))$ when $Z(0) \geq 1$. This is a sharp improvement over previous, similar approaches, which used a much more complicated algorithm requiring $O(\ln(Z(\beta))\ln(\ln(Z(\beta)))^5)$ samples.
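To make the definitions concrete, the minimal sketch below evaluates $Z(\beta)$ and a Gibbs probability by brute-force enumeration for a toy one-dimensional Ising-style energy $H$ on binary spins. The energy function, system size, and inverse temperature are hypothetical choices for illustration only; this is not the paper's sampling algorithm, whose whole point is to avoid this exponential-time summation.

```python
import itertools
import math

def H(x):
    """Toy energy of a configuration x in {-1, +1}^n:
    the number of disagreeing neighboring spin pairs."""
    return sum(1 for a, b in zip(x, x[1:]) if a != b)

def partition_function(beta, n):
    """Exact Z(beta) = sum_x exp(-beta * H(x)) by full enumeration.
    Feasible only for small n (2^n terms)."""
    return sum(math.exp(-beta * H(x))
               for x in itertools.product((-1, 1), repeat=n))

def gibbs_probability(x, beta, Z):
    """P(X = x) = exp(-beta * H(x)) / Z(beta), with Z(beta) precomputed."""
    return math.exp(-beta * H(x)) / Z

if __name__ == "__main__":
    n, beta = 10, 0.7          # hypothetical system size and inverse temperature
    Z = partition_function(beta, n)
    ground_state = tuple([1] * n)   # H = 0, so this is the most probable state
    print(f"Z({beta}) = {Z:.4f}")
    print(f"P(ground state) = {gibbs_probability(ground_state, beta, Z):.6f}")
    # Z(0) = 2^n >= 1 here, matching the condition Z(0) >= 1 in the abstract.
    print(f"Z(0) = {partition_function(0.0, n):.0f}")
```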
