ResearchTrend.AI

arXiv:1412.4986
A Scalable Asynchronous Distributed Algorithm for Topic Modeling

16 December 2014
Hsiang-Fu Yu
Cho-Jui Hsieh
Hyokun Yun
S.V.N. Vishwanathan
Inderjit S. Dhillon
Abstract

Learning meaningful topic models from massive document collections containing millions of documents and billions of tokens is challenging for two reasons. First, one needs to deal with a large number of topics (typically on the order of thousands). Second, one needs a scalable and efficient way of distributing the computation across multiple machines. In this paper we present a novel algorithm, F+Nomad LDA, which simultaneously tackles both problems. To handle a large number of topics we use an appropriately modified Fenwick tree. This data structure allows us to sample from a multinomial distribution over T items in O(log T) time; moreover, when topic counts change, the data structure can be updated in O(log T) time. To distribute the computation across multiple processors we present a novel asynchronous framework inspired by the Nomad algorithm of \cite{YunYuHsietal13}. We show that F+Nomad LDA significantly outperforms state-of-the-art methods on massive problems involving millions of documents, billions of words, and thousands of topics.
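The paper's F+ tree is a modification of the Fenwick (binary indexed) tree; the abstract does not spell out the details, but a plain Fenwick tree already illustrates the key property the authors rely on: sampling an index in proportion to a set of T weights in O(log T) time, with O(log T) point updates when a topic count changes. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation (the class name and API are invented for this example):

```python
import random

class FenwickSampler:
    """Fenwick tree over T non-negative weights (e.g. topic counts).
    Supports O(log T) point updates and O(log T) sampling of an index
    with probability proportional to its weight."""

    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * (self.n + 1)   # 1-indexed implicit tree
        self.weights = [0.0] * self.n
        for i, w in enumerate(weights):
            self.update(i, w)

    def update(self, i, new_weight):
        # O(log T): propagate the change up the implicit tree.
        delta = new_weight - self.weights[i]
        self.weights[i] = new_weight
        j = i + 1
        while j <= self.n:
            self.tree[j] += delta
            j += j & (-j)

    def total(self):
        # Prefix sum over all T entries, O(log T).
        s, j = 0.0, self.n
        while j > 0:
            s += self.tree[j]
            j -= j & (-j)
        return s

    def sample(self):
        # O(log T): draw u uniformly in [0, total) and descend the tree
        # to the smallest index whose prefix sum exceeds u.
        u = random.random() * self.total()
        idx = 0
        bitmask = 1 << self.n.bit_length()
        while bitmask > 0:
            nxt = idx + bitmask
            if nxt <= self.n and self.tree[nxt] <= u:
                u -= self.tree[nxt]
                idx = nxt
            bitmask >>= 1
        return idx  # 0-based topic index
```

In a Gibbs sampler for LDA, each token's topic assignment both reads from and writes to these counts, so the combination of logarithmic sampling and logarithmic updates is what keeps per-token cost at O(log T) rather than O(T).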
