PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities

5 March 2023
Zhuqing Liu, Xin Zhang, Songtao Lu, Jia-Wei Liu
Abstract

Recently, min-max optimization problems have received increasing attention due to their wide range of applications in machine learning (ML). However, most existing min-max solution techniques are either single-machine or distributed algorithms coordinated by a central server. In this paper, we focus on decentralized min-max optimization for learning with domain constraints, where multiple agents collectively solve a nonconvex-strongly-concave min-max saddle point problem without coordination from any server. Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning. We propose an algorithm called PRECISION (proximal gradient-tracking and stochastic recursive variance reduction) that enjoys a convergence rate of $O(1/T)$, where $T$ is the maximum number of iterations. To further reduce sample complexity, we propose PRECISION$^+$ with an adaptive batch size technique. We show that the fast $O(1/T)$ convergence of PRECISION and PRECISION$^+$ to an $\epsilon$-stationary point implies an $O(\epsilon^{-2})$ communication complexity and an $O(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of agents and $n$ is the size of the dataset at each agent. To our knowledge, this is the first work to achieve $O(\epsilon^{-2})$ in both sample and communication complexities for decentralized min-max learning with domain constraints. Our experiments also corroborate the theoretical results.

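The abstract names the main algorithmic ingredients: decentralized agents, a proximal (projection) step for the domain constraints, and gradient tracking on both the min and max variables. The Python sketch below is a minimal illustration of that decentralized proximal gradient-tracking update on a toy constrained nonconvex-strongly-concave problem. The ring mixing matrix, quadratic local losses, step sizes, and constraint sets are all illustrative assumptions, and the sketch omits the paper's SPIDER-style recursive variance reduction and the adaptive batch sizes of PRECISION$^+$, so it should not be read as the authors' actual PRECISION method.

# A minimal sketch of decentralized constrained min-max learning with
# gradient tracking, in the spirit of the setting described above.
# All problem data, step sizes, and the mixing matrix are illustrative
# assumptions; the paper's variance reduction and adaptive batching are omitted.
import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 5                      # number of agents, problem dimension

# Ring-topology mixing matrix (doubly stochastic): each agent averages
# its state with its two neighbors.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

# Local losses f_i(x, y) = 0.5 x^T A_i x + x^T B_i y - 0.5 * mu * ||y||^2
# (strongly concave in y). A_i is symmetric but not PSD, so the x-side
# is nonconvex, matching the nonconvex-strongly-concave setting.
A = [rng.standard_normal((d, d)) for _ in range(m)]
A = [0.5 * (Ai + Ai.T) for Ai in A]
B = [rng.standard_normal((d, d)) for _ in range(m)]
mu = 1.0

def grad_x(i, x, y): return A[i] @ x + B[i] @ y
def grad_y(i, x, y): return B[i].T @ x - mu * y

# Domain constraints: x in a box, y in a Euclidean ball. Their cheap
# projections play the role of the proximal step.
def proj_x(x): return np.clip(x, -1.0, 1.0)
def proj_y(y):
    nrm = np.linalg.norm(y)
    return y if nrm <= 5.0 else 5.0 * y / nrm

X = rng.standard_normal((m, d))          # local copies of the min variable
Y = rng.standard_normal((m, d))          # local copies of the max variable
GX = np.stack([grad_x(i, X[i], Y[i]) for i in range(m)])  # gradient trackers
GY = np.stack([grad_y(i, X[i], Y[i]) for i in range(m)])
eta_x, eta_y = 0.02, 0.05

for t in range(500):
    # Consensus mixing plus projected descent (in x) / ascent (in y)
    # along the tracked gradients.
    X_new = np.stack([proj_x(W[i] @ X - eta_x * GX[i]) for i in range(m)])
    Y_new = np.stack([proj_y(W[i] @ Y + eta_y * GY[i]) for i in range(m)])

    # Gradient tracking: mix the trackers, then add the local gradient change.
    GX = np.stack([
        W[i] @ GX + grad_x(i, X_new[i], Y_new[i]) - grad_x(i, X[i], Y[i])
        for i in range(m)
    ])
    GY = np.stack([
        W[i] @ GY + grad_y(i, X_new[i], Y_new[i]) - grad_y(i, X[i], Y[i])
        for i in range(m)
    ])
    X, Y = X_new, Y_new

print("consensus error in x:", np.linalg.norm(X - X.mean(axis=0)))

In this toy run, the consensus error shrinks as the agents' local copies agree through mixing, while the trackers keep each agent's update direction aligned with the network-wide gradient; the full algorithm in the paper additionally replaces the exact local gradients with recursively variance-reduced stochastic estimates to obtain the stated sample complexity.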
View on arXiv: https://arxiv.org/abs/2303.02532