ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation (arXiv:1905.03684)

9 May 2019 · Colin Wei, Tengyu Ma

Papers citing "Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation"

27 of 27 citing papers shown:
| Title | Authors | Topics | Date |
|---|---|---|---|
| Explainable Neural Networks with Guarantees: A Sparse Estimation Approach | Antoine Ledent, Peng Liu | FAtt | 20 Feb 2025 |
| How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning | Arthur Jacot, Seok Hoan Choi, Yuxiao Wen | AI4CE | 08 Jul 2024 |
| Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization | Kaiyue Wen, Zhiyuan Li, Tengyu Ma | FAtt | 20 Jul 2023 |
| Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression | Junho Kim, Byung-Kwan Lee, Yonghyun Ro | CML, AAML | 02 Mar 2023 |
| Koopman-based generalization bound: New aspect for full-rank weights | Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki | | 12 Feb 2023 |
| Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion | Haotian Ju, Dongyue Li, Aneesh Sharma, Hongyang R. Zhang | | 09 Feb 2023 |
| On the Lipschitz Constant of Deep Networks and Double Descent | Matteo Gamba, Hossein Azizpour, Marten Bjorkman | | 28 Jan 2023 |
| PAC-Bayesian-Like Error Bound for a Class of Linear Time-Invariant Stochastic State-Space Models | Deividas Eringis, J. Leth, Zheng-Hua Tan, Rafal Wisniewski, M. Petreczky | | 30 Dec 2022 |
| Do highly over-parameterized neural networks generalize since bad solutions are rare? | Julius Martinetz, T. Martinetz | | 07 Nov 2022 |
| Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models | Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma | AI4CE | 25 Oct 2022 |
| MaxMatch: Semi-Supervised Learning with Worst-Case Consistency | Yangbangyan Jiang, Xiaodan Li, YueFeng Chen, Yuan He, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang | | 26 Sep 2022 |
| Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models | Ethan Pickering, T. Sapsis | | 27 Aug 2022 |
| Integral Probability Metrics PAC-Bayes Bounds | Ron Amit, Baruch Epstein, Shay Moran, Ron Meir | | 01 Jul 2022 |
| Adversarial robustness of sparse local Lipschitz predictors | Ramchandran Muthukumar, Jeremias Sulam | AAML | 26 Feb 2022 |
| Improved Regularization and Robustness for Fine-tuning in Neural Networks | Dongyue Li, Hongyang R. Zhang | NoLa | 08 Nov 2021 |
| Perturbated Gradients Updating within Unit Space for Deep Learning | Ching-Hsun Tseng, Liu Cheng, Shin-Jye Lee, Xiaojun Zeng | | 01 Oct 2021 |
| Generalization bounds via distillation | Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang | FedML | 12 Apr 2021 |
| Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization | Sang Michael Xie, Tengyu Ma, Percy Liang | | 29 Jun 2020 |
| Shape Matters: Understanding the Implicit Bias of the Noise Covariance | Jeff Z. HaoChen, Colin Wei, J. Lee, Tengyu Ma | | 15 Jun 2020 |
| Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks | Kenta Oono, Taiji Suzuki | AI4CE | 15 Jun 2020 |
| In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors | Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy | AI4CE | 09 Dec 2019 |
| Decomposable-Net: Scalable Low-Rank Compression for Neural Networks | A. Yaguchi, Taiji Suzuki, Shuhei Nitta, Y. Sakata, A. Tanizawa | | 29 Oct 2019 |
| Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin | Colin Wei, Tengyu Ma | AAML, OOD | 09 Oct 2019 |
| Generalization bounds for deep convolutional neural networks | Philip M. Long, Hanie Sedghi | MLT | 29 May 2019 |
| Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel | Colin Wei, J. Lee, Qiang Liu, Tengyu Ma | | 12 Oct 2018 |
| On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang | ODL | 15 Sep 2016 |
| Norm-Based Capacity Control in Neural Networks | Behnam Neyshabur, Ryota Tomioka, Nathan Srebro | | 27 Feb 2015 |