ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Norm-Based Capacity Control in Neural Networks
arXiv:1503.00036 (v2, latest)
27 February 2015
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro

Papers citing "Norm-Based Capacity Control in Neural Networks"

50 / 407 papers shown
1. Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
   Zitong Yang, Yu Bai, Song Mei (08 Mar 2021)
2. Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations
   Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen (03 Mar 2021) [AAML, OOD]
3. Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks
   D. Gamarnik, Eren C. Kızıldağ, Ilias Zadik (02 Mar 2021)
4. LocalDrop: A Hybrid Regularization for Deep Neural Networks
   Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama (01 Mar 2021)
5. Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks
   Ryumei Nakada, Masaaki Imaizumi (28 Feb 2021)
6. Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm
   Meena Jagadeesan, Ilya P. Razenshteyn, Suriya Gunasekar (24 Feb 2021)
7. Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders
   Oskar Allerbo, Rebecka Jörnsten (22 Feb 2021)
8. Generalization bounds for graph convolutional neural networks via Rademacher complexity
   Shaogao Lv (20 Feb 2021) [GNN]
9. A Law of Robustness for Weight-bounded Neural Networks
   Hisham Husain, Borja Balle (16 Feb 2021)
10. Effects of quantum resources on the statistical complexity of quantum circuits
    Kaifeng Bu, D. E. Koh, Lu Li, Qingxian Luo, Yaobo Zhang (05 Feb 2021)
11. Information-Theoretic Generalization Bounds for Stochastic Gradient Descent
    Gergely Neu, Gintare Karolina Dziugaite, Mahdi Haghifam, Daniel M. Roy (01 Feb 2021)
12. Activation Functions in Artificial Neural Networks: A Systematic Overview
    Johannes Lederer (25 Jan 2021) [FAtt, AI4CE]
13. Robustness to Augmentations as a Generalization metric
    Sumukh K Aithal, D. Kashyap, Natarajan Subramanyam (16 Jan 2021) [OOD]
14. On the statistical complexity of quantum circuits
    Kaifeng Bu, D. E. Koh, Lu Li, Qingxian Luo, Yaobo Zhang (15 Jan 2021)
15. Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds
    B. Georgiev, L. Franken, Mayukh Mukherjee (15 Jan 2021) [AAML]
16. Learning with Gradient Descent and Weakly Convex Losses
    Dominic Richards, Michael G. Rabbat (13 Jan 2021) [MLT]
17. Data augmentation and image understanding
    Alex Hernandez-Garcia (28 Dec 2020)
18. Recent advances in deep learning theory
    Fengxiang He, Dacheng Tao (20 Dec 2020) [AI4CE]
19. NeurIPS 2020 Competition: Predicting Generalization in Deep Learning
    Yiding Jiang, Pierre Foret, Scott Yak, Daniel M. Roy, H. Mobahi, Gintare Karolina Dziugaite, Samy Bengio, Suriya Gunasekar, Isabelle M Guyon, Behnam Neyshabur (14 Dec 2020) [OOD]
20. Convex Regularization Behind Neural Reconstruction
    Arda Sahiner, Morteza Mardani, Batu Mehmet Ozturkler, Mert Pilanci, John M. Pauly (09 Dec 2020)
21. Semantically Robust Unpaired Image Translation for Data with Unmatched Semantics Statistics
    Zhiwei Jia, Bodi Yuan, Kangkang Wang, Hong Wu, David Clifford, Zhiqiang Yuan, Hao Su (09 Dec 2020) [VLM]
22. Generalization bounds for deep learning
    Guillermo Valle Pérez, A. Louis (07 Dec 2020) [BDL]
23. Why Unsupervised Deep Networks Generalize
    Anita de Mello Koch, E. Koch, R. Koch (07 Dec 2020) [OOD]
24. Representation Based Complexity Measures for Predicting Generalization in Deep Learning
    Parth Natekar, Manik Sharma (04 Dec 2020)
25. Dissipative Deep Neural Dynamical Systems
    Ján Drgoňa, Soumya Vasisht, Aaron Tuor, D. Vrabie (26 Nov 2020)
26. Deep Empirical Risk Minimization in finance: looking into the future
    A. M. Reppen, H. Soner (18 Nov 2020)
27. An Information-Geometric Distance on the Space of Tasks
    Yansong Gao, Pratik Chaudhari (01 Nov 2020)
28. The power of quantum neural networks
    Amira Abbas, David Sutter, Christa Zoufal, Aurelien Lucchi, Alessio Figalli, Stefan Woerner (30 Oct 2020)
29. Why Do Better Loss Functions Lead to Less Transferable Features?
    Simon Kornblith, Ting-Li Chen, Honglak Lee, Mohammad Norouzi (30 Oct 2020) [FaML]
30. Deep Learning is Singular, and That's Good
    Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella (22 Oct 2020) [UQCV]
31. Failures of model-dependent generalization bounds for least-norm interpolation
    Peter L. Bartlett, Philip M. Long (16 Oct 2020)
32. The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
    Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi (16 Oct 2020) [OffRL]
33. Layer-adaptive sparsity for the Magnitude-based Pruning
    Jaeho Lee, Sejun Park, Sangwoo Mo, SungSoo Ahn, Jinwoo Shin (15 Oct 2020)
34. How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks
    Gao Jin, Xinping Yi, Liang Zhang, Lijun Zhang, S. Schewe, Xiaowei Huang (12 Oct 2020)
35. Improve the Robustness and Accuracy of Deep Neural Network with $L_{2,\infty}$ Normalization
    Lijia Yu, Xiao-Shan Gao (10 Oct 2020)
36. How Does Mixup Help With Robustness and Generalization?
    Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou (09 Oct 2020) [AAML]
37. Subspace Embeddings Under Nonlinear Transformations
    Aarshvi Gajjar, Cameron Musco (05 Oct 2020)
38. The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks
    Gen Li, Yuantao Gu, Jie Ding (02 Oct 2020)
39. Normalization Techniques in Training DNNs: Methodology, Analysis and Application
    Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao (27 Sep 2020) [AI4CE]
40. Learning Optimal Representations with the Decodable Information Bottleneck
    Yann Dubois, Douwe Kiela, D. Schwab, Ramakrishna Vedantam (27 Sep 2020)
41. Pruning Neural Networks at Initialization: Why are We Missing the Mark?
    Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin (18 Sep 2020)
42. Risk Bounds for Robust Deep Learning
    Johannes Lederer (14 Sep 2020) [OOD]
43. Complexity Measures for Neural Networks with General Activation Functions Using Path-based Norms
    Zhong Li, Chao Ma, Lei Wu (14 Sep 2020)
44. Extreme Memorization via Scale of Initialization
    Harsh Mehta, Ashok Cutkosky, Behnam Neyshabur (31 Aug 2020)
45. A Functional Perspective on Learning Symmetric Functions with Neural Networks
    Aaron Zweig, Joan Bruna (16 Aug 2020)
46. Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations
    Ziquan Liu, Yufei Cui, Antoni B. Chan (07 Aug 2020)
47. Neural Complexity Measures
    Yoonho Lee, Juho Lee, Sung Ju Hwang, Eunho Yang, Seungjin Choi (07 Aug 2020)
48. Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network Based Vector-to-Vector Regression
    Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee (04 Aug 2020)
49. Convexifying Sparse Interpolation with Infinitely Wide Neural Networks: An Atomic Norm Approach
    Akshay Kumar, Jarvis Haupt (15 Jul 2020)
50. From deep to Shallow: Equivalent Forms of Deep Networks in Reproducing Kernel Krein Space and Indefinite Support Vector Machines
    A. Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh (15 Jul 2020)