Essentially No Barriers in Neural Network Energy Landscape (arXiv:1803.00885)
2 March 2018
Felix Dräxler, K. Veschgini, M. Salmhofer, Fred Hamprecht
[MoMe]
Papers citing "Essentially No Barriers in Neural Network Energy Landscape" (50 of 295 papers shown)
How Tempering Fixes Data Augmentation in Bayesian Neural Networks. Gregor Bachmann, Lorenzo Noci, Thomas Hofmann. 27 May 2022. [BDL, AAML]
Linear Connectivity Reveals Generalization Strategies. Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra. 24 May 2022.
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang. 24 May 2022. [AAML]
Interpolating Compressed Parameter Subspaces. Siddhartha Datta, N. Shadbolt. 19 May 2022.
Diverse Weight Averaging for Out-of-Distribution Generalization. Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, A. Rakotomamonjy, Patrick Gallinari, Matthieu Cord. 19 May 2022. [OOD]
FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks. Aleksandar Doknic, Torsten Möller. 9 Apr 2022.
Improving Generalization in Federated Learning by Seeking Flat Minima. Debora Caldarola, Barbara Caputo, Marco Ciccone. 22 Mar 2022. [FedML]
A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima. Tae-Eon Ko, Xiantao Li. 21 Mar 2022.
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks. Siddhartha Datta, N. Shadbolt. 7 Mar 2022. [AAML]
Continual Learning Beyond a Single Model. T. Doan, Seyed Iman Mirzadeh, Mehrdad Farajtabar. 20 Feb 2022. [CLL]
Geometric Regularization from Overparameterization. Nicholas J. Teague. 18 Feb 2022.
Deep Networks on Toroids: Removing Symmetries Reveals the Structure of Flat Regions in the Landscape Geometry. Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, R. Zecchina. 7 Feb 2022.
Anticorrelated Noise Injection for Improved Generalization. Antonio Orvieto, Hans Kersting, F. Proske, Francis R. Bach, Aurelien Lucchi. 6 Feb 2022.
When Do Flat Minima Optimizers Work? Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner. 1 Feb 2022. [ODL]
Born-Infeld (BI) for AI: Energy-Conserving Descent (ECD) for Optimization. G. Luca, E. Silverstein. 26 Jan 2022.
Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape. Devansh Bisla, Jing Wang, A. Choromańska. 20 Jan 2022.
Generalization in Supervised Learning Through Riemannian Contraction. L. Kozachkov, Patrick M. Wensing, Jean-Jacques E. Slotine. 17 Jan 2022. [MLT]
Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks. Shaun Li. 3 Jan 2022. [AI4CE]
LossPlot: A Better Way to Visualize Loss Landscapes. Robert Bain, M. Tokarev, Harsh R Kothari, Rahul Damineni. 30 Nov 2021.
Mode connectivity in the loss landscape of parameterized quantum circuits. Kathleen E. Hamilton, E. Lynn, R. Pooser. 9 Nov 2021.
Exponential escape efficiency of SGD from sharp minima in non-stationary regime. Hikaru Ibayashi, Masaaki Imaizumi. 7 Nov 2021.
Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks. Aleksandr Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli. 3 Nov 2021. [MLT]
Towards Better Plasticity-Stability Trade-off in Incremental Learning: A Simple Linear Connector. Guoliang Lin, Hanlu Chu, Hanjiang Lai. 15 Oct 2021. [MoMe, CLL]
What Happens after SGD Reaches Zero Loss? --A Mathematical Framework. Zhiyuan Li, Tianhao Wang, Sanjeev Arora. 13 Oct 2021. [MLT]
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks. R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur. 12 Oct 2021. [MoMe]
Tighter Sparse Approximation Bounds for ReLU Neural Networks. Carles Domingo-Enrich, Youssef Mroueh. 7 Oct 2021.
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective. Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun. 6 Oct 2021. [OOD]
Learning through atypical "phase transitions" in overparameterized neural networks. Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina. 1 Oct 2021.
Connecting Low-Loss Subspace for Personalized Federated Learning. S. Hahn, Minwoo Jeong, Junghye Lee. 16 Sep 2021. [FedML]
Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility. Liyun Tu, Austin Talbot, Neil Gallagher, David Carlson. 9 Sep 2021. [DRL]
Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training. Max Mutschler, Kevin Laube, A. Zell. 31 Aug 2021. [ODL]
Shift-Curvature, SGD, and Generalization. Arwen V. Bradley, C. Gomez-Uribe, Manish Reddy Vuyyuru. 21 Aug 2021.
Taxonomizing local versus global structure in neural network loss landscapes. Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney. 23 Jul 2021.
Structured Directional Pruning via Perturbation Orthogonal Projection. Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng. 12 Jul 2021.
What can linear interpolation of neural network loss landscapes tell us? Tiffany J. Vlaar, Jonathan Frankle. 30 Jun 2021. [MoMe]
Revisiting Model Stitching to Compare Neural Representations. Yamini Bansal, Preetum Nakkiran, Boaz Barak. 14 Jun 2021. [FedML]
Quantifying and Localizing Usable Information Leakage from Neural Network Gradients. Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Soteris Demetriou, Deniz Gündüz, Hamed Haddadi. 28 May 2021. [FedML]
Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances. Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, W. Gerstner, Johanni Brea. 25 May 2021. [AI4CE]
Advances in Multi-Variate Analysis Methods for New Physics Searches at the Large Hadron Collider. A. Stakia, T. Dorigo, G. Banelli, D. Bortoletto, A. Casa, ..., G. Strong, C. Tosciri, J. Varela, Pietro Vischia, A. Weiler. 16 May 2021.
What Are Bayesian Neural Network Posteriors Really Like? Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, A. Wilson. 29 Apr 2021. [UQCV, BDL]
Policy Manifold Search: Exploring the Manifold Hypothesis for Diversity-based Neuroevolution. Nemanja Rakićević, Antoine Cully, Petar Kormushev. 27 Apr 2021.
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes. James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse. 22 Apr 2021. [MoMe]
MetricOpt: Learning to Optimize Black-Box Evaluation Metrics. Chen Huang, Shuangfei Zhai, Pengsheng Guo, J. Susskind. 21 Apr 2021.
Rehearsal revealed: The limits and merits of revisiting samples in continual learning. Eli Verwimp, Matthias De Lange, Tinne Tuytelaars. 15 Apr 2021. [CLL]
Training Deep Neural Networks via Branch-and-Bound. Yuanwei Wu, Ziming Zhang, Guanghui Wang. 5 Apr 2021. [ODL]
Empirically explaining SGD from a line search perspective. Max Mutschler, A. Zell. 31 Mar 2021. [ODL, LRM]
GridDehazeNet+: An Enhanced Multi-Scale Network with Intra-Task Knowledge Transfer for Single Image Dehazing. Xiaohong Liu, Zhihao Shi, Zijun Wu, Jun Chen. 25 Mar 2021.
Spurious Local Minima Are Common for Deep Neural Networks with Piecewise Linear Activations. Bo Liu. 25 Feb 2021.
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling. Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, A. Wilson. 25 Feb 2021. [UQCV]
Noisy Gradient Descent Converges to Flat Minima for Nonconvex Matrix Factorization. Tianyi Liu, Yan Li, S. Wei, Enlu Zhou, T. Zhao. 24 Feb 2021.