ResearchTrend.AI

Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective
arXiv: 1908.10920

28 August 2019
Guan-Horng Liu, Evangelos A. Theodorou
[AI4CE]

Papers citing "Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective"

17 papers shown.

 1. Blending Optimal Control and Biologically Plausible Learning for Noise-Robust Physical Neural Networks
    S. Sunada, T. Niiyama, Kazutaka Kanno, Rin Nogami, André Röhm, Takato Awano, Atsushi Uchida [AI4CE] · 26 Feb 2025
 2. Data Selection via Optimal Control for Language Models
    Yuxian Gu, Li Dong, Hongning Wang, Y. Hao, Qingxiu Dong, Furu Wei, Minlie Huang [AI4CE] · 09 Oct 2024
 3. Rethinking the Relationship between Recurrent and Non-Recurrent Neural Networks: A Study in Sparsity
    Quincy Hershey, Randy Paffenroth, Harsh Nilesh Pathak, Simon Tavener · 01 Apr 2024
 4. Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares
    Anant Raj, Melih Barsbey, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli · 02 Jun 2022
 5. Kullback-Leibler control for discrete-time nonlinear systems on continuous spaces
    Kaito Ito, Kenji Kashima · 24 Mar 2022
 6. Optimal learning rate schedules in high-dimensional non-convex optimization problems
    Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli · 09 Feb 2022
 7. Physical deep learning based on optimal control of dynamical systems
    Genki Furuhata, T. Niiyama, S. Sunada [PINN, AI4CE] · 16 Dec 2020
 8. Learn to Synchronize, Synchronize to Learn
    Pietro Verzelli, Cesare Alippi, L. Livi · 06 Oct 2020
 9. A Differential Game Theoretic Neural Optimizer for Training Residual Networks
    Guan-Horng Liu, T. Chen, Evangelos A. Theodorou · 17 Jul 2020
10. Responsive Safety in Reinforcement Learning by PID Lagrangian Methods
    Adam Stooke, Joshua Achiam, Pieter Abbeel · 08 Jul 2020
11. A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm
    O. Romero, Subhro Das, Pin-Yu Chen, S. Pequito · 23 Jun 2020
12. A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth
    Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying [MLT] · 11 Mar 2020
13. Towards Robust and Stable Deep Learning Algorithms for Forward Backward Stochastic Differential Equations
    Batuhan Güler, Alexis Laignelet, P. Parpas [OOD] · 25 Oct 2019
14. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
    Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington · 14 Jun 2018
15. First-order Methods Almost Always Avoid Saddle Points
    Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht [ODL] · 20 Oct 2017
16. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Chelsea Finn, Pieter Abbeel, Sergey Levine [OOD] · 09 Mar 2017
17. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang [ODL] · 15 Sep 2016