ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1905.13344 · Cited By
Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience

30 May 2019
Vaishnavh Nagarajan
J. Zico Kolter

Papers citing "Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience"

26 papers
Explainable Neural Networks with Guarantees: A Sparse Estimation Approach
Antoine Ledent, Peng Liu (20 Feb 2025) [FAtt]

On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences
A. F. Pour, H. Ashtiani (28 May 2023)

Subspace-Configurable Networks
Dong Wang, O. Saukh, Xiaoxi He, Lothar Thiele (22 May 2023) [OOD]

On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba, Hossein Azizpour, Mårten Björkman (28 Jan 2023)

PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
Sanae Lotfi, Marc Finzi, Sanyam Kapoor, Andres Potapczynski, Micah Goldblum, A. Wilson (24 Nov 2022) [BDL, MLT, AI4CE]

Do highly over-parameterized neural networks generalize since bad solutions are rare?
Julius Martinetz, T. Martinetz (07 Nov 2022)

Generalisation under gradient descent via deterministic PAC-Bayes
Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet (06 Sep 2022)

Integral Probability Metrics PAC-Bayes Bounds
Ron Amit, Baruch Epstein, Shay Moran, Ron Meir (01 Jul 2022)

Benefits of Additive Noise in Composing Classes with Bounded Capacity
A. F. Pour, H. Ashtiani (14 Jun 2022)

Adversarial robustness of sparse local Lipschitz predictors
Ramchandran Muthukumar, Jeremias Sulam (26 Feb 2022) [AAML]

On PAC-Bayesian reconstruction guarantees for VAEs
Badr-Eddine Chérief-Abdellatif, Yuyang Shi, Arnaud Doucet, Benjamin Guedj (23 Feb 2022) [DRL]

Non-Vacuous Generalisation Bounds for Shallow Neural Networks
Felix Biggs, Benjamin Guedj (03 Feb 2022) [BDL]

Improved Regularization and Robustness for Fine-tuning in Neural Networks
Dongyue Li, Hongyang R. Zhang (08 Nov 2021) [NoLa]

Perturbated Gradients Updating within Unit Space for Deep Learning
Ching-Hsun Tseng, Liu Cheng, Shin-Jye Lee, Xiaojun Zeng (01 Oct 2021)

Ridgeless Interpolation with Shallow ReLU Networks in 1D is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions
Boris Hanin (27 Sep 2021) [MLT]

RATT: Leveraging Unlabeled Data to Guarantee Generalization
Saurabh Garg, Sivaraman Balakrishnan, J. Zico Kolter, Zachary Chase Lipton (01 May 2021)

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus (04 Mar 2021) [AAML]

On the role of data in PAC-Bayes bounds
Gintare Karolina Dziugaite, Kyle Hsu, W. Gharbieh, Gabriel Arpino, Daniel M. Roy (19 Jun 2020)

Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
Kenta Oono, Taiji Suzuki (15 Jun 2020) [AI4CE]

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma (09 Oct 2019) [AAML, OOD]

Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization
Xinyan Li, Qilong Gu, Yingxue Zhou, Tiancong Chen, A. Banerjee (24 Jul 2019) [ODL]

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi (12 Jun 2019) [MLT]

Norm-based generalisation bounds for multi-class convolutional neural networks
Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft (29 May 2019)

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Colin Wei, Tengyu Ma (09 May 2019)

Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan, J. Zico Kolter (13 Feb 2019) [MoMe, AI4CE]

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]
