Sensitivity and Generalization in Neural Networks: an Empirical Study

Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
arXiv:1802.08760 · 23 February 2018 · AAML

Papers citing "Sensitivity and Generalization in Neural Networks: an Empirical Study"

Showing 43 of 93 citing papers.
Meta-Learning Requires Meta-Augmentation
Janarthanan Rajendran, A. Irpan, Eric Jang
10 Jul 2020

Learning Differential Equations that are Easy to Solve
Jacob Kelly, J. Bettencourt, Matthew J. Johnson, David Duvenaud
09 Jul 2020

Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
29 Jun 2020 · CoGe

Riemannian Continuous Normalizing Flows
Emile Mathieu, Maximilian Nickel
18 Jun 2020 · AI4CE

What Do Neural Networks Learn When Trained With Random Labels?
Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, R. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers
18 Jun 2020 · FedML

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk
15 Jun 2020 · AAML

Dataless Model Selection with the Deep Frame Potential
Calvin Murdock, Simon Lucey
30 Mar 2020

Classifying the classifier: dissecting the weight space of neural networks
Gabriel Eilertsen, Daniel Jonsson, Timo Ropinski, Jonas Unger, Anders Ynnerman
13 Feb 2020

How to train your neural ODE: the world of Jacobian and kinetic regularization
Chris Finlay, J. Jacobsen, L. Nurbekyan, Adam M. Oberman
07 Feb 2020

Optimized Generic Feature Learning for Few-shot Classification across Domains
Tonmoy Saikia, Thomas Brox, Cordelia Schmid
22 Jan 2020 · VLM

Empirical Studies on the Properties of Linear Regions in Deep Neural Networks
Xiao Zhang, Dongrui Wu
04 Jan 2020

Mining Domain Knowledge: Improved Framework towards Automatically Standardizing Anatomical Structure Nomenclature in Radiotherapy
Qiming Yang, H. Chao, D. Nguyen, Steve B. Jiang
04 Dec 2019

Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
Greg Yang
28 Oct 2019

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma
09 Oct 2019 · AAML, OOD

Needles in Haystacks: On Classifying Tiny Objects in Large Images
Nick Pawlowski, Suvrat Bhooshan, Nicolas Ballas, F. Ciompi, Ben Glocker, M. Drozdzal
16 Aug 2019

Dimensionality compression and expansion in Deep Neural Networks
Stefano Recanatesi, M. Farrell, Madhu S. Advani, Timothy Moore, Guillaume Lajoie, E. Shea-Brown
02 Jun 2019

On Network Design Spaces for Visual Recognition
Ilija Radosavovic, Justin Johnson, Saining Xie, Wan-Yen Lo, Piotr Dollár
30 May 2019

Heterogeneous causal effects with imperfect compliance: a Bayesian machine learning approach
Falco J. Bargagli-Stoffi, Kristof De-Witte, G. Gnecco
29 May 2019

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
28 May 2019 · MLT

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
27 May 2019 · AAML

Minimal Achievable Sufficient Statistic Learning
Milan Cvitkovic, Günther Koliander
19 May 2019

Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
Colin Wei, Tengyu Ma
09 May 2019

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington
18 Feb 2019

Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa, Xin Wang
15 Feb 2019

Stiffness: A New Perspective on Generalization in Neural Networks
Stanislav Fort, Pawel Krzysztof Nowak, Stanislaw Jastrzebski, S. Narayanan
28 Jan 2019

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yan Xiao, Zheng Ma
19 Jan 2019

Three Mechanisms of Weight Decay Regularization
Guodong Zhang, Chaoqi Wang, Bowen Xu, Roger C. Grosse
29 Oct 2018

Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
11 Oct 2018 · UQCV, BDL

Detecting Memorization in ReLU Networks
Edo Collins, Siavash Bigdeli, Sabine Süsstrunk
08 Oct 2018

Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem
Alon Brutzkus, Amir Globerson
06 Oct 2018 · MLT

Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance
Ramprasaath R. Selvaraju, Prithvijit Chattopadhyay, Mohamed Elhoseiny, Tilak Sharma, Dhruv Batra, Devi Parikh, Stefan Lee
08 Aug 2018

Generalization Error in Deep Learning
Daniel Jakubovitz, Raja Giryes, M. Rodrigues
03 Aug 2018 · AI4CE

PCA of high dimensional random walks with comparison to neural network training
J. Antognini, Jascha Narain Sohl-Dickstein
22 Jun 2018 · OOD

On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville
22 Jun 2018

Data augmentation instead of explicit regularization
Alex Hernández-García, Peter König
11 Jun 2018

A Simple Cache Model for Image Recognition
Emin Orhan
21 May 2018 · VLM

Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle
Rana Ali Amjad, Bernhard C. Geiger
27 Feb 2018

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
27 Feb 2018 · AAML, SILM

Is Generator Conditioning Causally Related to GAN Performance?
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, C. Olah, Colin Raffel, Ian Goodfellow
23 Feb 2018 · AI4CE

Gradient Regularization Improves Accuracy of Discriminative Models
D. Varga, Adrián Csiszárik, Zsolt Zombori
28 Dec 2017

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
04 Nov 2016 · AAML

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · ODL

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014 · ODL