arXiv:1609.07521
Fast Learning of Clusters and Topics via Sparse Posteriors

23 September 2016
M. C. Hughes
Erik B. Sudderth
Abstract

Mixture models and topic models generate each observation from a single cluster, but standard variational posteriors for each observation assign positive probability to all possible clusters. This requires dense storage and runtime costs that scale with the total number of clusters, even though typically only a few clusters have significant posterior mass for any data point. We propose a constrained family of sparse variational distributions that allow at most L non-zero entries, where the tunable threshold L trades off speed for accuracy. Previous sparse approximations have used hard assignments (L=1), but we find that moderate values of L>1 provide superior performance. Our approach easily integrates with stochastic or incremental optimization algorithms to scale to millions of examples. Experiments training mixture models of image patches and topic models for news articles show that our approach produces better-quality models in far less time than baseline methods.
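To make the core idea concrete, here is a minimal sketch of a top-L sparse posterior update as the abstract describes it: given dense per-observation log-probabilities over K clusters, keep only the L largest entries per observation and renormalize over that support. This is not the authors' implementation; the function name sparse_top_l and the top-L-then-renormalize projection are illustrative assumptions inferred from the abstract.

```python
import numpy as np

def sparse_top_l(log_probs, L):
    """Project each row of dense log-probabilities onto its top-L support.

    log_probs : (N, K) array of unnormalized log cluster probabilities.
    L         : number of non-zero entries kept per observation (1 <= L <= K).

    Returns (idx, resp): the (N, L) retained cluster indices and the
    (N, L) responsibilities, which sum to 1 over each row.
    """
    N, K = log_probs.shape
    # Indices of the L largest entries in each row (unordered within the top L).
    idx = np.argpartition(log_probs, K - L, axis=1)[:, K - L:]
    top = np.take_along_axis(log_probs, idx, axis=1)
    # Softmax restricted to the retained support (max-subtraction for stability).
    top = np.exp(top - top.max(axis=1, keepdims=True))
    resp = top / top.sum(axis=1, keepdims=True)
    return idx, resp

# Example: 5 observations, 100 clusters, at most L = 4 active clusters each.
rng = np.random.default_rng(0)
idx, resp = sparse_top_l(rng.normal(size=(5, 100)), L=4)
assert np.allclose(resp.sum(axis=1), 1.0)
```

Storing only (idx, resp) per observation gives memory and downstream summary-statistic costs that scale with L rather than K, which is the speed/accuracy trade-off the tunable threshold L controls; L=1 recovers hard assignment as a special case.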
