ResearchTrend.AI

© 2025 ResearchTrend.AI. All rights reserved.
Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 February 2015 · arXiv:1503.00036 (v2, latest)

Papers citing "Norm-Based Capacity Control in Neural Networks" (showing 50 of 407)
Explicit Regularisation in Gaussian Noise Injections
A. Camuto, M. Willetts, Umut Simsekli, Stephen J. Roberts, Chris Holmes. 14 Jul 2020.

Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
Taiji Suzuki. 11 Jul 2020.

Efficient Proximal Mapping of the 1-path-norm of Shallow Networks
Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher. AAML. 02 Jul 2020.
A Revision of Neural Tangent Kernel-based Approaches for Neural Networks
Kyungsu Kim, A. Lozano, Eunho Yang. AAML. 02 Jul 2020.

Information Theoretic Lower Bounds for Feed-Forward Fully-Connected Deep Networks
Xiaochen Yang, Jean Honorio. 01 Jul 2020.

Relative Deviation Margin Bounds
Corinna Cortes, M. Mohri, A. Suresh. 26 Jun 2020.
The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models
Chao Ma, Lei Wu, Weinan E. MLT. 25 Jun 2020.

Compositional Learning of Image-Text Query for Image Retrieval
Muhammad Umer Anwaar, Egor Labintcev, M. Kleinsteuber. CoGe. 19 Jun 2020.

What Do Neural Networks Learn When Trained With Random Labels?
Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya O. Tolstikhin, R. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers. FedML. 18 Jun 2020.
On Sparsity in Overparametrised Shallow ReLU Networks
Jaume de Dios, Joan Bruna. 18 Jun 2020.

Revisiting minimum description length complexity in overparameterized models
Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright. 17 Jun 2020.

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu. 16 Jun 2020.
Why Mixup Improves the Model Performance
Masanari Kimura. 11 Jun 2020.

Network size and weights size for memorization with two-layers neural networks
Sébastien Bubeck, Ronen Eldan, Y. Lee, Dan Mikulincer. 04 Jun 2020.

Statistical Guarantees for Regularized Neural Networks
Mahsa Taheri, Fang Xie, Johannes Lederer. 30 May 2020.

A function space analysis of finite neural networks with insights from sampling theory
Raja Giryes. 15 Apr 2020.
Deep Networks as Logical Circuits: Generalization and Interpretation
Christopher Snyder, S. Vishwanath. FAtt, AI4CE. 25 Mar 2020.

Hyperplane Arrangements of Trained ConvNets Are Biased
Matteo Gamba, S. Carlsson, Hossein Azizpour, Mårten Björkman. 17 Mar 2020.

Dropout: Explicit Forms and Capacity Control
R. Arora, Peter L. Bartlett, Poorya Mianjy, Nathan Srebro. 06 Mar 2020.
Predicting Neural Network Accuracy from Weights
Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya O. Tolstikhin. 26 Feb 2020.

De-randomized PAC-Bayes Margin Bounds: Applications to Non-convex and Non-smooth Predictors
A. Banerjee, Tiancong Chen, Yingxue Zhou. BDL. 23 Feb 2020.

On the generalization of Bayesian deep nets for multi-class classification
Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan. BDL, UQCV. 23 Feb 2020.
Distance-Based Regularisation of Deep Networks for Fine-Tuning
Henry Gouk, Timothy M. Hospedales, Massimiliano Pontil. 19 Feb 2020.

Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data
Charles H. Martin, Tongsu Peng, Michael W. Mahoney. 17 Feb 2020.

Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
Lingkai Kong, Molei Tao. 14 Feb 2020.
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang. MLT. 10 Feb 2020.

Quasi-Equivalence of Width and Depth of Neural Networks
Fenglei Fan, Rongjie Lai, Ge Wang. 06 Feb 2020.

Almost Sure Convergence of Dropout Algorithms for Neural Networks
Albert Senen-Cerda, J. Sanders. 06 Feb 2020.

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale. AI4CE. 04 Feb 2020.
Generative Modeling with Denoising Auto-Encoders and Langevin Sampling
Adam Block, Youssef Mroueh, Alexander Rakhlin. DiffM. 31 Jan 2020.

Understanding Generalization in Deep Learning via Tensor Methods
Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, Furong Huang. 14 Jan 2020.

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang. AAML, AI4CE. 08 Jan 2020.
Relative Flatness and Generalization
Henning Petzka, Michael Kamp, Linara Adilova, C. Sminchisescu, Mario Boley. 03 Jan 2020.

Deep learning architectures for nonlinear operator functions and nonlinear inverse problems
Maarten V. de Hoop, Matti Lassas, C. Wong. 23 Dec 2019.

Analytic expressions for the output evolution of a deep neural network
Anastasia Borovykh. 18 Dec 2019.
Observational Overfitting in Reinforcement Learning
Xingyou Song, Yiding Jiang, Stephen Tu, Yilun Du, Behnam Neyshabur. OffRL. 06 Dec 2019.

Fantastic Generalization Measures and Where to Find Them
Yiding Jiang, Behnam Neyshabur, H. Mobahi, Dilip Krishnan, Samy Bengio. AI4CE. 04 Dec 2019.

Stationary Points of Shallow Neural Networks with Quadratic Activation Function
D. Gamarnik, Eren C. Kizildag, Ilias Zadik. 03 Dec 2019.
The intriguing role of module criticality in the generalization of deep networks
Niladri S. Chatterji, Behnam Neyshabur, Hanie Sedghi. 02 Dec 2019.

How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu. 27 Nov 2019.

Information-Theoretic Local Minima Characterization and Regularization
Zhiwei Jia, Hao Su. 19 Nov 2019.
Global Capacity Measures for Deep ReLU Networks via Path Sampling
Ryan Theisen, Jason M. Klusowski, Huan Wang, N. Keskar, Caiming Xiong, R. Socher. 22 Oct 2019.

Improved Generalization Bounds of Group Invariant / Equivariant Deep Networks via Quotient Feature Spaces
Akiyoshi Sannai, Masaaki Imaizumi, M. Kawano. MLT. 15 Oct 2019.

Generalization Bounds for Neural Networks via Approximate Description Length
Amit Daniely, Elad Granot. 13 Oct 2019.
Learning from Multiple Corrupted Sources, with Application to Learning from Label Proportions
Clayton Scott, Jianxin Zhang. 10 Oct 2019.

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma. AAML, OOD. 09 Oct 2019.

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case
Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro. 03 Oct 2019.

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, Jason D. Lee. 03 Oct 2019.
Generalization Bounds for Convolutional Neural Networks
Shan Lin, Jingwei Zhang. MLT. 03 Oct 2019.

How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections?
Kartikeya Bhardwaj, Guihong Li, R. Marculescu. 02 Oct 2019.