Learning over Multitask Graphs -- Part I: Stability Analysis

22 May 2018
Roula Nassif
Stefan Vlaski
Cedric Richard
Ali H. Sayed
arXiv:1805.08535 · PDF · HTML
Abstract

This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness condition softens the transition in the tasks between adjacent nodes and allows information about the graph structure to be incorporated into the solution of the inference problem. A diffusion strategy is devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relies on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. In this Part I of the work, we show that, under conditions on the step-size parameter, the adaptive strategy induces a contraction mapping and leads to small estimation errors on the order of the step-size. The results in the accompanying Part II will reveal explicitly the influence of the network topology and the regularization strength on the network performance and will provide insights into the design of effective multitask strategies for distributed inference over networks.
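The abstract identifies two ingredients: a per-agent stochastic-gradient adaptation step on the local cost, and a coupling between neighbors induced by the smoothness regularizer. The following is a minimal simulation sketch of one such strategy. The quadratic (mean-square-error) local costs, the LMS-type stochastic gradient, the combinatorial graph Laplacian as the smoothness term, the ring topology, and the specific adapt-then-combine recursion are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Sketch of a diffusion strategy for multitask learning over a graph.
# Assumptions (not taken from the abstract): quadratic local costs,
# an LMS-type stochastic-gradient step, a graph-Laplacian smoothness
# regularizer, and a ring graph; the paper's recursion may differ.
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 2          # number of agents, parameter dimension
mu, eta = 0.01, 1.0   # small step-size, regularization strength

# Ring graph: adjacency matrix and combinatorial Laplacian L = D - A.
A = np.zeros((N, N))
for k in range(N):
    A[k, (k + 1) % N] = A[(k + 1) % N, k] = 1.0
L = np.diag(A.sum(axis=1)) - A

# True per-agent parameter vectors that vary smoothly across the
# graph, i.e., a multitask setting with related but distinct tasks.
theta = 2 * np.pi * np.arange(N) / N
w_true = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (N, M)

w = np.zeros((N, M))                 # current estimates, one row per agent
combine = np.eye(N) - mu * eta * L   # coupling induced by the regularizer

for i in range(20000):
    # Adapt: each agent takes a stochastic-gradient (LMS) step on its
    # own cost from one streaming sample, replacing the true gradient.
    u = rng.standard_normal((N, M))                    # regressors
    d = np.einsum('km,km->k', u, w_true) \
        + 0.1 * rng.standard_normal(N)                 # noisy observations
    err = d - np.einsum('km,km->k', u, w)
    psi = w + mu * err[:, None] * u

    # Combine: the smoothness term couples neighbors through the
    # Laplacian, pulling adjacent estimates toward each other.
    w = combine @ psi

print("mean-square deviation:", np.mean((w - w_true) ** 2))
```

Consistent with the stability result stated in the abstract, shrinking the step-size mu in this sketch shrinks the steady-state deviation, while increasing eta drives neighboring estimates closer together.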
