Stronger generalization bounds for deep nets via a compression approach
Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
14 February 2018 · arXiv:1802.05296 · MLT, AI4CE

Papers citing "Stronger generalization bounds for deep nets via a compression approach"

40 / 440 papers shown
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang · MLT · 12 Nov 2018

Characterizing Well-Behaved vs. Pathological Deep Neural Networks
Mitchell Stern · 07 Nov 2018

Sample Compression, Support Vectors, and Generalization in Deep Learning
Christopher Snyder, S. Vishwanath · MLT · 05 Nov 2018

Rademacher Complexity for Adversarially Robust Generalization
Dong Yin, Kannan Ramchandran, Peter L. Bartlett · AAML · 29 Oct 2018

A Priori Estimates of the Population Risk for Two-layer Neural Networks
Weinan E, Chao Ma, Lei Wu · 15 Oct 2018

Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma · 12 Oct 2018

The Outer Product Structure of Neural Network Derivatives
Craig Bakker, Michael J. Henry, Nathan Oken Hodas · 09 Oct 2018

Detecting Memorization in ReLU Networks
Edo Collins, Siavash Bigdeli, Sabine Süsstrunk · 08 Oct 2018

SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr · VLM · 04 Oct 2018

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney · AI4CE · 02 Oct 2018

NICE: Noise Injection and Clamping Estimation for Neural Network Quantization
Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Giryes, A. Mendelson, A. Bronstein · MQ · 29 Sep 2018

Predicting the Generalization Gap in Deep Networks with Margin Distributions
Yiding Jiang, Dilip Krishnan, H. Mobahi, Samy Bengio · UQCV · 28 Sep 2018

An analytic theory of generalization dynamics and transfer learning in deep linear networks
Andrew Kyle Lampinen, Surya Ganguli · OOD · 27 Sep 2018

On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
Yusuke Tsuzuku, Issei Sato · AAML · 11 Sep 2018

Approximation and Estimation for High-Dimensional Deep Learning Networks
Andrew R. Barron, Jason M. Klusowski · 10 Sep 2018

Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations
Julius Berner, Philipp Grohs, Arnulf Jentzen · 09 Sep 2018

Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error
Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura · MLT · 26 Aug 2018

On the Decision Boundary of Deep Neural Networks
Yu Li, Lizhong Ding, Xin Gao · UQCV · 16 Aug 2018

Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
Yuanzhi Li, Yingyu Liang · MLT · 03 Aug 2018

Generalization Error in Deep Learning
Daniel Jakubovitz, Raja Giryes, M. Rodrigues · AI4CE · 03 Aug 2018

A Mean-Field Optimal Control Formulation of Deep Learning
Weinan E, Jiequn Han, Qianxiao Li · OOD · 03 Jul 2018

There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average
Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, A. Wilson · 14 Jun 2018

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond
Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, T. Zhao · 13 Jun 2018

Efficient Full-Matrix Adaptive Regularization
Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang · 08 Jun 2018

Training Faster by Separating Modes of Variation in Batch-normalized Models
Mahdi M. Kalayeh, M. Shah · 07 Jun 2018

Minnorm training: an algorithm for training over-parameterized deep neural networks
Yamini Bansal, Madhu S. Advani, David D. Cox, Andrew M. Saxe · ODL · 03 Jun 2018

Interpreting Deep Learning: The Machine Learning Rorschach Test?
Adam S. Charles · AAML, HAI, AI4CE · 01 Jun 2018

Deep learning generalizes because the parameter-function map is biased towards simple functions
Guillermo Valle Pérez, Chico Q. Camargo, A. Louis · MLT, AI4CE · 22 May 2018

State-Denoised Recurrent Neural Networks
Michael C. Mozer, Denis Kazakov, Robert V. Lindsey · AI4TS · 22 May 2018

How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?
S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh · SSL · 21 May 2018

DNN or k-NN: That is the Generalize vs. Memorize Question
Gilad Cohen, Guillermo Sapiro, Raja Giryes · 17 May 2018

Robustness via Deep Low-Rank Representations
Amartya Sanyal, Varun Kanade, Philip Torr, P. Dokania · OOD · 19 Apr 2018

Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz · 16 Apr 2018

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus · 15 Apr 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin · 09 Mar 2018

The Description Length of Deep Learning Models
Léonard Blier, Yann Ollivier · 20 Feb 2018

Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification
Yinhao Zhu, N. Zabaras · UQCV, BDL · 21 Jan 2018

Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck
Thanh T. Nguyen, Jaesik Choi · 04 Dec 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro · 27 Feb 2015