Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time
arXiv:2107.10868

22 July 2021
Yuyang Deng, Mohammad Mahdi Kamani, M. Mahdavi
FedML

Papers citing "Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time"

3 of 3 citing papers shown.

EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models
Jialiang Cheng, Ning Gao, Yun Yue, Zhiling Ye, Jiadi Jiang, Jian Sha
OffRL · 10 Dec 2024

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021