Distributed Online Optimization with Long-Term Constraints

20 December 2019
Deming Yuan
Alexandre Proutière
Guodong Shi
arXiv:1912.09705
Abstract

We consider distributed online convex optimization problems, where the distributed system consists of various computing units connected through a time-varying communication graph. In each time step, each computing unit selects a constrained vector, experiences a loss equal to an arbitrary convex function evaluated at this vector, and may communicate to its neighbors in the graph. The objective is to minimize the system-wide loss accumulated over time. We propose a decentralized algorithm with regret and cumulative constraint violation in $\mathcal{O}(T^{\max\{c,1-c\}})$ and $\mathcal{O}(T^{1-c/2})$, respectively, for any $c\in(0,1)$, where $T$ is the time horizon. When the loss functions are strongly convex, we establish improved regret and constraint violation upper bounds in $\mathcal{O}(\log(T))$ and $\mathcal{O}(\sqrt{T\log(T)})$. These regret scalings match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problem (for both convex and strongly convex loss functions). In the case of bandit feedback, the proposed algorithms achieve a regret and constraint violation in $\mathcal{O}(T^{\max\{c,1-c/3\}})$ and $\mathcal{O}(T^{1-c/2})$ for any $c\in(0,1)$. We numerically illustrate the performance of our algorithms for the particular case of distributed online regularized linear regression problems.
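To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of a generic distributed online primal-dual gradient method applied to the regularized linear regression example mentioned in the abstract, with a long-term constraint $\|x\|_1 \le r$. The ring-graph mixing matrix, step sizes, data stream, and update rule are all assumptions chosen for illustration only.

```python
# Illustrative sketch of distributed online optimization with a long-term
# constraint g(x) = ||x||_1 - r <= 0, on an online ridge-regression stream.
# This is NOT the algorithm proposed in the paper; all modeling choices
# below (ring graph, step sizes, primal-dual rule) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 5, 10, 2000          # agents, dimension, time horizon
lam, r = 0.1, 5.0              # ridge weight, l1 budget in the constraint
x = np.zeros((N, d))           # each agent's decision vector
mu = np.zeros(N)               # each agent's dual variable for the constraint
x_star = rng.normal(size=d)    # ground-truth regressor generating the stream

def ring_weights(n):
    """Doubly stochastic mixing matrix for a fixed ring graph (a stand-in
    for the time-varying communication graph considered in the paper)."""
    W = np.eye(n) / 3
    for i in range(n):
        W[i, (i - 1) % n] += 1 / 3
        W[i, (i + 1) % n] += 1 / 3
    return W

W = ring_weights(N)
total_loss, total_violation = 0.0, 0.0

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)                      # decaying step size (generic choice)
    x = W @ x                                   # consensus step: mix neighbors' iterates
    mu = W @ mu
    grads = np.zeros_like(x)
    for i in range(N):
        a = rng.normal(size=d)                  # agent i's fresh data point
        b = a @ x_star + 0.1 * rng.normal()
        pred_err = a @ x[i] - b
        loss = pred_err ** 2 + lam * x[i] @ x[i]
        g = np.abs(x[i]).sum() - r              # long-term constraint value
        # primal (sub)gradient of the Lagrangian f_t^i(x) + mu_i * g(x)
        grads[i] = 2 * pred_err * a + 2 * lam * x[i] + mu[i] * np.sign(x[i])
        mu[i] = max(0.0, mu[i] + eta * g)       # dual ascent on the multiplier
        total_loss += loss
        total_violation += max(0.0, g)
    x -= eta * grads                            # primal descent step

print(f"avg loss: {total_loss / (N * T):.3f}, "
      f"avg constraint violation: {total_violation / (N * T):.3f}")
```

The dual variable accumulates observed constraint violation over time, so the constraint only needs to be satisfied cumulatively rather than at every step; this is what "long-term constraints" refers to in this line of work.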
