Stronger generalization bounds for deep nets via a compression approach

14 February 2018
Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
MLT, AI4CE

Papers citing "Stronger generalization bounds for deep nets via a compression approach"

50 of 440 citing papers shown.
Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
    Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma · SSL · 07 Oct 2020
Improved generalization by noise enhancement
    Takashi Mori, Masahito Ueda · 28 Sep 2020
Learning Optimal Representations with the Decodable Information Bottleneck
    Yann Dubois, Douwe Kiela, D. Schwab, Ramakrishna Vedantam · 27 Sep 2020
MSR-DARTS: Minimum Stable Rank of Differentiable Architecture Search
    Kengo Machida, Kuniaki Uto, Koichi Shinoda, Taiji Suzuki · 19 Sep 2020
Achieving Adversarial Robustness via Sparsity
    Shu-Fan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang · AAML · 11 Sep 2020
On Computability, Learnability and Extractability of Finite State Machines from Recurrent Neural Networks
    Reda Marzouk · 10 Sep 2020
Higher-order Quasi-Monte Carlo Training of Deep Neural Networks
    M. Longo, Suman Mishra, T. Konstantin Rusch, Christoph Schwab · 06 Sep 2020
HALO: Learning to Prune Neural Networks with Shrinkage
    Skyler Seto, M. Wells, Wenyu Zhang · 24 Aug 2020
Generalization Guarantees for Imitation Learning
    Allen Z. Ren, Sushant Veer, Anirudha Majumdar · 05 Aug 2020
Making Coherence Out of Nothing At All: Measuring the Evolution of Gradient Alignment
    S. Chatterjee, Piotr Zielinski · 03 Aug 2020
Generalization Comparison of Deep Neural Networks via Output Sensitivity
    Mahsa Forouzesh, Farnood Salehi, Patrick Thiran · 30 Jul 2020
Depth separation for reduced deep networks in nonlinear model reduction: Distilling shock waves in nonlinear hyperbolic problems
    Donsub Rim, Luca Venturi, Joan Bruna, Benjamin Peherstorfer · 28 Jul 2020
From deep to Shallow: Equivalent Forms of Deep Networks in Reproducing Kernel Krein Space and Indefinite Support Vector Machines
    A. Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh · 15 Jul 2020
Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
    Taiji Suzuki · 11 Jul 2020
Meta-Learning with Network Pruning
    Hongduan Tian, Bo Liu, Xiaotong Yuan, Qingshan Liu · 07 Jul 2020
Are Labels Always Necessary for Classifier Accuracy Evaluation?
    Weijian Deng, Liang Zheng · 06 Jul 2020
DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths
    Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan Yao · 04 Jul 2020
Self-supervised Neural Architecture Search
    Sapir Kaplan, Raja Giryes · SSL · 03 Jul 2020
A Revision of Neural Tangent Kernel-based Approaches for Neural Networks
    Kyungsu Kim, A. Lozano, Eunho Yang · AAML · 02 Jul 2020
Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs
    Siddhartha Mishra, Roberto Molinaro · PINN · 29 Jun 2020
Is SGD a Bayesian sampler? Well, almost
    Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis · BDL · 26 Jun 2020
Continual Learning from the Perspective of Compression
    Xu He, Min Lin · CLL · 26 Jun 2020
A Limitation of the PAC-Bayes Framework
    Roi Livni, Shay Moran · 24 Jun 2020
Revisiting minimum description length complexity in overparameterized models
    Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright · 17 Jun 2020
Using Wavelets and Spectral Methods to Study Patterns in Image-Classification Datasets
    Roozbeh Yousefzadeh, Furong Huang · 17 Jun 2020
Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
    Kenta Oono, Taiji Suzuki · AI4CE · 15 Jun 2020
Tangent Space Sensitivity and Distribution of Linear Regions in ReLU Networks
    Balint Daroczy · AAML · 11 Jun 2020
Meta Transition Adaptation for Robust Deep Learning with Noisy Labels
    Jun Shu, Qian Zhao, Zengben Xu, Deyu Meng · NoLa · 10 Jun 2020
Training with Multi-Layer Embeddings for Model Reduction
    Benjamin Ghaemmaghami, Zihao Deng, B. Cho, Leo Orshansky, A. Singh, M. Erez, Michael Orshansky · 3DV · 10 Jun 2020
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
    Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li · 10 Jun 2020
Pruning neural networks without any data by iteratively conserving synaptic flow
    Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli · 09 Jun 2020
Pruning via Iterative Ranking of Sensitivity Statistics
    Stijn Verdenius, M. Stol, Patrick Forré · AAML · 01 Jun 2020
Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences
    Siddhartha Mishra, T. Konstantin Rusch · 26 May 2020
Is deeper better? It depends on locality of relevant features
    Takashi Mori, Masahito Ueda · OOD · 26 May 2020
Learning the gravitational force law and other analytic functions
    Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang · MLT · 15 May 2020
Computing the Testing Error without a Testing Set
    C. Corneanu, Meysam Madadi, Sergio Escalera, Aleix M. Martinez · AAML · 01 May 2020
Local Lipschitz Bounds of Deep Neural Networks
    Calypso Herrera, Florian Krach, Josef Teichmann · 27 Apr 2020
Evolution of Q Values for Deep Q Learning in Stable Baselines
    M. Andrews, Cemil Dibek, Karina Palyutina · 24 Apr 2020
Deep Networks as Logical Circuits: Generalization and Interpretation
    Christopher Snyder, S. Vishwanath · FAtt, AI4CE · 25 Mar 2020
Hyperplane Arrangements of Trained ConvNets Are Biased
    Matteo Gamba, S. Carlsson, Hossein Azizpour, Mårten Björkman · 17 Mar 2020
Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale
    Piotr Zielinski, Shankar Krishnan, S. Chatterjee · ODL · 16 Mar 2020
Invariant Causal Prediction for Block MDPs
    Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Z. Kwiatkowska, Joelle Pineau, Y. Gal, Doina Precup · OffRL, AI4CE, OOD · 12 Mar 2020
Analyzing Visual Representations in Embodied Navigation Tasks
    Erik Wijmans, Julian Straub, Dhruv Batra, Irfan Essa, Judy Hoffman, Ari S. Morcos · 12 Mar 2020
Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule
    Nikhil Iyer, V. Thejas, Nipun Kwatra, Ramachandran Ramjee, Muthian Sivathanu · 09 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
    Alex Renda, Jonathan Frankle, Michael Carbin · 05 Mar 2020
Towards Probability-based Safety Verification of Systems with Components from Machine Learning
    H. Kaindl, Stefan Kramer · 02 Mar 2020
The Implicit and Explicit Regularization Effects of Dropout
    Colin Wei, Sham Kakade, Tengyu Ma · 28 Feb 2020
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
    Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez · 26 Feb 2020
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
    S. Chatterjee · ODL, OOD · 25 Feb 2020
De-randomized PAC-Bayes Margin Bounds: Applications to Non-convex and Non-smooth Predictors
    A. Banerjee, Tiancong Chen, Yingxue Zhou · BDL · 23 Feb 2020