Spectrally-normalized margin bounds for neural networks

Peter L. Bartlett, Dylan J. Foster, Matus Telgarsky
26 June 2017 · ODL

Papers citing "Spectrally-normalized margin bounds for neural networks"

50 / 804 papers shown

Norm-based generalisation bounds for multi-class convolutional neural networks
Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft
29 May 2019

Implicit Rugosity Regularization via Data Augmentation
Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk
28 May 2019

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
28 May 2019 · MLT

Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
Pengzhan Jin, Lu Lu, Yifa Tang, George Karniadakis
27 May 2019

Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks
Guodong Zhang, James Martens, Roger C. Grosse
27 May 2019 · ODL

Nonparametric Online Learning Using Lipschitz Regularized Deep Neural Networks
Guy Uziel
26 May 2019 · BDL

Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks
Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma
24 May 2019 · MLT, AI4CE

Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki
23 May 2019 · MLT

How degenerate is the parametrization of neural networks with the ReLU activation function?
Julius Berner, Dennis Elbrächter, Philipp Grohs
23 May 2019 · ODL

The role of invariance in spectral complexity-based generalization bounds
Konstantinos Pitas, Andreas Loukas, Mike Davies, P. Vandergheynst
23 May 2019 · BDL

Exploring Structural Sparsity of Deep Networks via Inverse Scale Spaces
Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan Yao
23 May 2019

Fine-grained Optimization of Deep Neural Networks
Mete Ozay
22 May 2019 · ODL

Revisiting hard thresholding for DNN pruning
Konstantinos Pitas, Mike Davies, P. Vandergheynst
21 May 2019 · AAML

Orthogonal Deep Neural Networks
Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, Dacheng Tao
15 May 2019

Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, W. Yin
14 May 2019 · AI4CE

The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study
Daniel S. Park, Jascha Narain Sohl-Dickstein, Quoc V. Le, Samuel L. Smith
09 May 2019

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Colin Wei, Tengyu Ma
09 May 2019

Defensive Quantization: When Efficiency Meets Robustness
Ji Lin, Chuang Gan, Song Han
17 Apr 2019 · MQ

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis
12 Apr 2019 · AAML

A Selective Overview of Deep Learning
Jianqing Fan, Cong Ma, Yiqiao Zhong
10 Apr 2019 · BDL, VLM

Fast Spatio-Temporal Residual Network for Video Super-Resolution
Sheng Li, Fengxiang He, Bo Du, Lefei Zhang, Yonghao Xu, Dacheng Tao
05 Apr 2019 · SupR

Why ResNet Works? Residuals Generalize
Fengxiang He, Tongliang Liu, Dacheng Tao
02 Apr 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
27 Mar 2019 · NoLa

A Control Lyapunov Perspective on Episodic Learning via Projection to State Stability
Andrew J. Taylor, Victor D. Dorobantu, M. Krishnamoorthy, Hoang Minh Le, Yisong Yue, Aaron D. Ames
18 Mar 2019

Theory III: Dynamics and Generalization in Deep Networks
Andrzej Banburski, Q. Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, T. Poggio
12 Mar 2019 · AI4CE

Limiting Network Size within Finite Bounds for Optimization
Linu Pinto, Sasi Gopalan
07 Mar 2019

A Priori Estimates of the Population Risk for Residual Networks
E. Weinan, Chao Ma, Qingcan Wang
06 Mar 2019 · UQCV

Implicit Regularization in Over-parameterized Neural Networks
M. Kubo, Ryotaro Banno, Hidetaka Manabe, Masataka Minoji
05 Mar 2019

Statistical Guarantees for the Robustness of Bayesian Neural Networks
L. Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, A. Patané, Matthew Wicker
05 Mar 2019 · AAML

Shallow Neural Networks for Fluid Flow Reconstruction with Limited Sensors
N. Benjamin Erichson, L. Mathelin, Z. Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz
20 Feb 2019 · AI4CE

Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan, J. Zico Kolter
13 Feb 2019 · MoMe, AI4CE

Identity Crisis: Memorization and Generalization under Extreme Overparameterization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Y. Singer
13 Feb 2019

Towards moderate overparameterization: global convergence guarantees for training shallow neural networks
Samet Oymak, Mahdi Soltanolkotabi
12 Feb 2019

Are All Layers Created Equal?
Chiyuan Zhang, Samy Bengio, Y. Singer
06 Feb 2019

Generalization Bounds For Unsupervised and Semi-Supervised Learning With Autoencoders
Baruch Epstein, Ron Meir
04 Feb 2019 · SSL, DRL, AI4CE

Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Yuan Cao, Quanquan Gu
04 Feb 2019 · ODL, MLT, AI4CE

An Empirical Study on Regularization of Deep Neural Networks by Local Rademacher Complexity
Yingzhen Yang, Jiahui Yu, Xingjian Li, Jun Huan, Thomas S. Huang
03 Feb 2019 · AI4CE

Complexity, Statistical Risk, and Metric Entropy of Deep Nets Using Total Path Variation
Andrew R. Barron, Jason M. Klusowski
02 Feb 2019

Asymmetric Valleys: Beyond Sharp and Flat Local Minima
Haowei He, Gao Huang, Yang Yuan
02 Feb 2019 · ODL, MLT

On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning
Jian Li, Xuanyuan Luo, Mingda Qiao
02 Feb 2019

Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise
Mayank Sharma, Aayush Yadav, Sumit Soman, Jayadeva Jayadeva
31 Jan 2019

Deep Learning for Inverse Problems: Bounds and Regularizers
Jaweria Amjad, Zhaoyang Lyu, M. Rodrigues
31 Jan 2019

Orthogonal Statistical Learning
Dylan J. Foster, Vasilis Syrgkanis
25 Jan 2019

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang
24 Jan 2019 · MLT

Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
Kamil Nar, Orhan Ocal, S. Shankar Sastry, Kannan Ramchandran
24 Jan 2019 · AAML

Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks
Charles H. Martin, Michael W. Mahoney
24 Jan 2019

Understanding Geometry of Encoder-Decoder CNNs
J. C. Ye, Woon Kyoung Sung
22 Jan 2019 · 3DV, AI4CE

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yan Xiao, Zheng Ma
19 Jan 2019

Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks using PAC-Bayesian Analysis
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
15 Jan 2019

Tightening Mutual Information Based Bounds on Generalization Error
Yuheng Bu, Shaofeng Zou, V. Veeravalli
15 Jan 2019