Essentially No Barriers in Neural Network Energy Landscape
Felix Dräxler, K. Veschgini, M. Salmhofer, Fred Hamprecht
arXiv:1803.00885, 2 March 2018
Community: MoMe
Papers citing "Essentially No Barriers in Neural Network Energy Landscape" (50 of 295 shown)
- Learning Neural Network Subspaces. Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari. 20 Feb 2021. [UQCV]
- When Are Solutions Connected in Deep Networks? Quynh N. Nguyen, Pierre Bréchet, Marco Mondelli. 18 Feb 2021.
- Topological obstructions in neural networks learning. S. Barannikov, Daria Voronkova, I. Trofimov, Alexander Korotin, Grigorii Sotnikov, Evgeny Burnaev. 31 Dec 2020.
- Recent advances in deep learning theory. Fengxiang He, Dacheng Tao. 20 Dec 2020. [AI4CE]
- Combating Mode Collapse in GAN training: An Empirical Analysis using Hessian Eigenvalues. Ricard Durall, Avraam Chatzimichailidis, P. Labus, J. Keuper. 17 Dec 2020. [GAN]
- Notes on Deep Learning Theory. Eugene Golikov. 10 Dec 2020. [VLM, AI4CE]
- GENNI: Visualising the Geometry of Equivalences for Neural Network Identifiability. Daniel Lengyel, Janith C. Petangoda, Isak Falk, Kate Highnam, Michalis Lazarou, A. Kolbeinsson, M. Deisenroth, N. Jennings. 14 Nov 2020.
- Numerical Exploration of Training Loss Level-Sets in Deep Neural Networks. Naveed Tahir, Garrett E. Katz. 09 Nov 2020.
- Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel. Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli. 28 Oct 2020.
- Linear Mode Connectivity in Multitask and Continual Learning. Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Görür, Razvan Pascanu, H. Ghasemzadeh. 09 Oct 2020. [CLL]
- Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin. 07 Oct 2020.
- Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate. Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora. 06 Oct 2020.
- Optimizing Mode Connectivity via Neuron Alignment. N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, P. Sattigeri, Rongjie Lai. 05 Sep 2020. [MoMe]
- FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning. Hong-You Chen, Wei-Lun Chao. 04 Sep 2020. [FedML]
- What is being transferred in transfer learning? Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang. 26 Aug 2020.
- Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry. Yossi Arjevani, M. Field. 04 Aug 2020.
- Low-loss connection of weight vectors: distribution-based approaches. Ivan Anokhin, Dmitry Yarotsky. 03 Aug 2020. [3DV]
- Data-driven effective model shows a liquid-like deep learning. Wenxuan Zou, Haiping Huang. 16 Jul 2020.
- From deep to Shallow: Equivalent Forms of Deep Networks in Reproducing Kernel Krein Space and Indefinite Support Vector Machines. A. Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh. 15 Jul 2020.
- The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain. Xin Wang. 07 Jul 2020.
- The Global Landscape of Neural Networks: An Overview. Ruoyu Sun, Dawei Li, Shiyu Liang, Tian Ding, R. Srikant. 02 Jul 2020.
- Persistent Neurons. Yimeng Min. 02 Jul 2020.
- Learn Faster and Forget Slower via Fast and Stable Task Adaptation. Farshid Varno, Lucas May Petry, Lisa Di-Jorio, Stan Matwin. 02 Jul 2020. [CLL]
- The Restricted Isometry of ReLU Networks: Generalization through Norm Concentration. Alex Goessmann, Gitta Kutyniok. 01 Jul 2020.
- Dynamic of Stochastic Gradient Descent with State-Dependent Noise. Qi Meng, Shiqi Gong, Wei Chen, Zhi-Ming Ma, Tie-Yan Liu. 24 Jun 2020.
- Directional Pruning of Deep Neural Networks. Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng. 16 Jun 2020. [ODL]
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them. Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk. 15 Jun 2020. [AAML]
- Understanding Global Loss Landscape of One-hidden-layer ReLU Networks, Part 2: Experiments and Analysis. Bo Liu. 15 Jun 2020.
- Beyond Random Matrix Theory for Deep Networks. Diego Granziol. 13 Jun 2020.
- Is the Skip Connection Provable to Reform the Neural Network Loss Landscape? Lifu Wang, Bo Shen, Ningrui Zhao, Zhiyuan Zhang. 10 Jun 2020.
- Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling. Giulio Franzese, Rosa Candela, Dimitrios Milios, Maurizio Filippone, Pietro Michiardi. 09 Jun 2020.
- Escaping Saddle Points Efficiently with Occupation-Time-Adapted Perturbations. Xin Guo, Jiequn Han, Mahan Tajrobehkar, Wenpin Tang. 09 May 2020.
- The critical locus of overparameterized neural networks. Y. Cooper. 08 May 2020. [UQCV]
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness. Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin. 30 Apr 2020. [AAML]
- Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima. Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto. 30 Apr 2020.
- Masking as an Efficient Alternative to Finetuning for Pretrained Language Models. Mengjie Zhao, Tao R. Lin, Fei Mi, Martin Jaggi, Hinrich Schütze. 26 Apr 2020.
- Predicting the outputs of finite deep neural networks trained with noisy gradients. Gadi Naveh, Oded Ben-David, H. Sompolinsky, Zohar Ringel. 02 Apr 2020.
- Towards Deep Learning Models Resistant to Large Perturbations. Amirreza Shaeiri, Rozhin Nobahari, M. Rohban. 30 Mar 2020. [OOD, AAML]
- SuperNet -- An efficient method of neural networks ensembling. Ludwik Bukowski, W. Dzwinel. 29 Mar 2020.
- Piecewise linear activations substantially shape the loss surfaces of neural networks. Fengxiang He, Bohan Wang, Dacheng Tao. 27 Mar 2020. [ODL]
- Interference and Generalization in Temporal Difference Learning. Emmanuel Bengio, Joelle Pineau, Doina Precup. 13 Mar 2020.
- Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule. Nikhil Iyer, V. Thejas, Nipun Kwatra, Ramachandran Ramjee, Muthian Sivathanu. 09 Mar 2020.
- Some Geometrical and Topological Properties of DNNs' Decision Boundaries. Bo Liu, Mengya Shen. 07 Mar 2020. [AAML]
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization. A. Wilson, Pavel Izmailov. 20 Feb 2020. [UQCV, BDL, OOD]
- Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function. Lingkai Kong, Molei Tao. 14 Feb 2020.
- Understanding Global Loss Landscape of One-hidden-layer ReLU Networks, Part 1: Theory. Bo Liu. 12 Feb 2020. [FAtt, MLT]
- A study of local optima for learning feature interactions using neural networks. Yangzi Guo, Adrian Barbu. 11 Feb 2020.
- SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks. Sungho Shin, Yoonho Boo, Wonyong Sung. 02 Feb 2020. [MQ]
- Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks. Aleksandr Shevchenko, Marco Mondelli. 20 Dec 2019.
- Optimization for deep learning: theory and algorithms. Ruoyu Sun. 19 Dec 2019. [ODL]