ResearchTrend.AI
Quantization Avoids Saddle Points in Distributed Optimization

15 March 2024
Yanan Bo
Yongqiang Wang

Papers citing "Quantization Avoids Saddle Points in Distributed Optimization"

Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs
Ziqin Chen
Yongqiang Wang
FedML
24 Oct 2023