ResearchTrend.AI
Efficient Full-Matrix Adaptive Regularization

arXiv:1806.02958 · 8 June 2018

Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang

Papers citing "Efficient Full-Matrix Adaptive Regularization" (6 of 6 shown)
When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020
Scalable Second Order Optimization for Deep Learning
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Y. Singer
20 Feb 2020
Matrix-Free Preconditioning in Online Learning
Ashok Cutkosky, Tamás Sarlós
29 May 2019
Why gradient clipping accelerates training: A theoretical justification for adaptivity
Jingzhao Zhang, Tianxing He, S. Sra, Ali Jadbabaie
28 May 2019
Stochastic Gradient Methods with Block Diagonal Matrix Adaptation
Jihun Yun, A. Lozano, Eunho Yang
26 May 2019
Escaping Saddle Points with Adaptive Gradient Methods
Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, S. Sra
26 Jan 2019