Entropy-regularized Gradient Estimators for Approximate Bayesian Inference

15 March 2025
Jasmeet Kaur
    BDL
    UQCV
Abstract

Effective uncertainty quantification is important for training modern predictive models with limited data, enhancing both accuracy and robustness. While Bayesian methods are effective for this purpose, they can be challenging to scale. When employing approximate Bayesian inference, ensuring the quality of samples from the posterior distribution in a computationally efficient manner is essential. This paper addresses estimating the Bayesian posterior to generate diverse samples by approximating the gradient flow of the Kullback-Leibler (KL) divergence and the cross entropy of the target approximation under the metric induced by the Stein operator. It presents empirical evaluations on classification tasks to assess the method's performance and discusses its effectiveness for model-based reinforcement learning that uses uncertainty-aware network dynamics models.

@article{kaur2025_2503.11964,
  title   = {Entropy-regularized Gradient Estimators for Approximate Bayesian Inference},
  author  = {Jasmeet Kaur},
  journal = {arXiv preprint arXiv:2503.11964},
  year    = {2025}
}