Social Learning and Distributed Hypothesis Testing

16 October 2014
Anusha Lalitha
T. Javidi
Anand D. Sarwate
Abstract

This paper considers a problem of distributed hypothesis testing and social learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observations, communicate these updates to their neighbors, and then perform a "non-Bayesian" linear consensus using the log-beliefs of their neighbors. We show that under mild assumptions the belief of any node in any incorrect hypothesis converges to zero exponentially fast, and we characterize the exponential rate of learning in terms of the network structure and the divergences between the observations' distributions. Our main result is a concentration property established for this rate of convergence.
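
The abstract describes a two-step update: a local Bayesian update followed by a linear consensus on log-beliefs. The sketch below is a minimal illustration of that style of rule on a toy 3-node network with binary hypotheses; it is not the authors' reference code, and the weight matrix W, the Bernoulli observation model, and all numerical values are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of a Bayesian-update-plus-
# log-linear-consensus rule of the kind described in the abstract.
# Assumptions: W is a row-stochastic weight matrix on a connected
# 3-node network; node i observes a Bernoulli sample whose bias
# depends on the true hypothesis.
import numpy as np

rng = np.random.default_rng(0)

THETA = [0, 1]                       # two hypotheses
biases = np.array([[0.3, 0.7],       # node 0: P(X = 1 | theta)
                   [0.4, 0.6],       # node 1
                   [0.2, 0.8]])      # node 2
true_theta = 1

W = np.array([[0.50, 0.25, 0.25],    # row-stochastic consensus weights
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

n = W.shape[0]
q = np.full((n, len(THETA)), 0.5)    # uniform initial beliefs

def likelihood(i, x, theta):
    """P(X = x | theta) under node i's local observation model."""
    p = biases[i, theta]
    return p if x == 1 else 1.0 - p

for t in range(200):
    # Each node draws a private sample under the true hypothesis.
    x = (rng.random(n) < biases[:, true_theta]).astype(int)
    # Step 1: local Bayesian update of each node's belief.
    b = np.array([[likelihood(i, x[i], th) * q[i, th] for th in THETA]
                  for i in range(n)])
    b /= b.sum(axis=1, keepdims=True)
    b = np.clip(b, 1e-300, None)     # keep log() finite
    # Step 2: "non-Bayesian" linear consensus on neighbors' log-beliefs.
    q = np.exp(W @ np.log(b))
    q /= q.sum(axis=1, keepdims=True)

print(q)   # each row should concentrate on the true hypothesis (column 1)
```

Running the sketch, every node's belief in the wrong hypothesis decays toward zero, consistent with the exponential convergence the abstract states. For intuition about the rate, exponents of this general form are often written as a network-weighted sum of divergences, e.g. $\sum_i v_i\, D\big(f_i(\cdot\,|\,\theta^*)\,\|\,f_i(\cdot\,|\,\theta)\big)$ with $v$ a stationary distribution of the weight matrix; this expression is an illustrative assumption here, not a quotation of the paper's theorem.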
