Implicit Regularization in Over-parameterized Neural Networks

5 March 2019
M. Kubo, Ryotaro Banno, Hidetaka Manabe, Masataka Minoji

Papers citing "Implicit Regularization in Over-parameterized Neural Networks"

7 / 7 papers shown

Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies
Bin Hu, Kaiqing Zhang, Na Li, M. Mesbahi, Maryam Fazel, Tamer Başar
10 Oct 2022 · 87 · 27 · 0

On Regularizing Coordinate-MLPs
Sameera Ramasinghe, L. MacDonald, Simon Lucey
01 Feb 2022 · 158 · 5 · 0

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney
10 Dec 2019 · 27 · 77 · 0

Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence
Kaiqing Zhang, Bin Hu, Tamer Başar
21 Oct 2019 · 24 · 119 · 0

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018 · 244 · 349 · 0

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · 308 · 2,892 · 0 · ODL

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016 · 153 · 603 · 0