Convergence Analyses of Online ADAM Algorithm in Convex Setting and Two-Layer ReLU Neural Network

22 May 2019
Biyi Fang, Diego Klabjan

Papers citing "Convergence Analyses of Online ADAM Algorithm in Convex Setting and Two-Layer ReLU Neural Network"

5 papers shown

Fast Adversarial Training with Adaptive Step Size
Zhichao Huang, Yanbo Fan, Chen Liu, Weizhong Zhang, Yong Zhang, Mathieu Salzmann, Sabine Süsstrunk, Jue Wang
06 Jun 2022

Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam
Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He
12 Feb 2022

A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu, Yura Malitsky, P. Mertikopoulos, Volkan Cevher
21 Mar 2020

A Simple Convergence Proof of Adam and Adagrad
Alexandre Défossez, Léon Bottou, Francis R. Bach, Nicolas Usunier
05 Mar 2020

Parallel and distributed asynchronous adaptive stochastic gradient methods
Yangyang Xu, Yibo Xu, Yonggui Yan, Colin Sutcher-Shepard, Leopold Grinberg, Jiewei Chen
21 Feb 2020