ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Aggregated Momentum: Stability Through Passive Damping
James Lucas, Shengyang Sun, R. Zemel, Roger C. Grosse
1 April 2018 · arXiv:1804.00325

Papers citing "Aggregated Momentum: Stability Through Passive Damping"

19 of 19 papers shown

 1. Diffusion Sampling with Momentum for Mitigating Divergence Artifacts
    Suttisak Wizadwongsa, Worameth Chinchuthakun, Pramook Khungurn, Amit Raj, Supasorn Suwajanakorn
    DiffM · 51 / 2 / 0 · 20 Jul 2023

 2. Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers
    Yineng Chen, Z. Li, Lefei Zhang, Bo Du, Hai Zhao
    33 / 4 / 0 · 02 Jul 2023

 3. Improving physics-informed neural networks with meta-learned optimization
    Alexander Bihlo
    PINN · 36 / 18 / 0 · 13 Mar 2023

 4. Learning to Optimize for Reinforcement Learning
    Qingfeng Lan, Rupam Mahmood, Shuicheng Yan, Zhongwen Xu
    OffRL · 31 / 6 / 0 · 03 Feb 2023

 5. Transformer-Based Learned Optimization
    Erik Gartner, Luke Metz, Mykhaylo Andriluka, C. Freeman, C. Sminchisescu
    23 / 11 / 0 · 02 Dec 2022

 6. Multilevel-in-Layer Training for Deep Neural Network Regression
    Colin Ponce, Ruipeng Li, Christina Mao, P. Vassilevski
    AI4CE · 19 / 1 / 0 · 11 Nov 2022

 7. A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
    James Harrison, Luke Metz, Jascha Narain Sohl-Dickstein
    49 / 22 / 0 · 22 Sep 2022

 8. On the Limitations of Stochastic Pre-processing Defenses
    Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot
    AAML, SILM · 47 / 31 / 0 · 19 Jun 2022

 9. Practical tradeoffs between memory, compute, and performance in learned optimizers
    Luke Metz, C. Freeman, James Harrison, Niru Maheswaranathan, Jascha Narain Sohl-Dickstein
    41 / 32 / 0 · 22 Mar 2022

10. Amortized Proximal Optimization
    Juhan Bae, Paul Vicol, Jeff Z. HaoChen, Roger C. Grosse
    ODL · 29 / 14 / 0 · 28 Feb 2022

11. A More Stable Accelerated Gradient Method Inspired by Continuous-Time Perspective
    Yasong Feng, Weiguo Gao
    23 / 0 / 0 · 09 Dec 2021

12. AdaInject: Injection Based Adaptive Gradient Descent Optimizers for Convolutional Neural Networks
    S. Dubey, S. H. Shabbeer Basha, S. Singh, B. B. Chaudhuri
    ODL · 50 / 9 / 0 · 26 Sep 2021

13. Reverse engineering learned optimizers reveals known and novel mechanisms
    Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Narain Sohl-Dickstein
    22 / 21 / 0 · 04 Nov 2020

14. Review: Deep Learning in Electron Microscopy
    Jeffrey M. Ede
    36 / 79 / 0 · 17 Sep 2020

15. Demon: Improved Neural Network Training with Momentum Decay
    John Chen, Cameron R. Wolfe, Zhaoqi Li, Anastasios Kyrillidis
    ODL · 24 / 15 / 0 · 11 Oct 2019

16. Lookahead Optimizer: k steps forward, 1 step back
    Michael Ruogu Zhang, James Lucas, Geoffrey E. Hinton, Jimmy Ba
    ODL · 54 / 721 / 0 · 19 Jul 2019

17. Using learned optimizers to make models robust to input noise
    Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Narain Sohl-Dickstein, E. D. Cubuk
    VLM, OOD · 23 / 26 / 0 · 08 Jun 2019

18. Quasi-hyperbolic momentum and Adam for deep learning
    Jerry Ma, Denis Yarats
    ODL · 84 / 129 / 0 · 16 Oct 2018

19. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
    Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
    108 / 1,157 / 0 · 04 Mar 2015