ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Large Scale Structure of Neural Network Loss Landscapes
arXiv:1906.04724 · 11 June 2019
Stanislav Fort
Stanislaw Jastrzebski

Papers citing "Large Scale Structure of Neural Network Loss Landscapes"

30 papers shown
Analyzing the Role of Permutation Invariance in Linear Mode Connectivity
Keyao Zhan
Puheng Li
Lei Wu
MoMe
87
0
0
13 Mar 2025
CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks
Kaizheng Wang
Keivan K. Shariatmadar
Shireen Kudukkil Manchingal
Fabio Cuzzolin
David Moens
Hans Hallez
UQCV
BDL
95
12
0
28 Jan 2025
Reinforcement Teaching
Alex Lewandowski
Calarina Muslimani
Dale Schuurmans
Matthew E. Taylor
Jun Luo
87
1
0
28 Jan 2025
Input Space Mode Connectivity in Deep Neural Networks
Jakub Vrabel
Ori Shem-Ur
Yaron Oz
David Krueger
63
1
0
09 Sep 2024
Towards Scalable and Versatile Weight Space Learning
Konstantin Schurholt
Michael W. Mahoney
Damian Borth
55
16
0
14 Jun 2024
Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
Moonseok Choi
Hyungi Lee
G. Nam
Juho Lee
42
2
0
24 May 2023
A survey of deep learning optimizers -- first and second order methods
Rohan Kashyap
ODL
47
6
0
28 Nov 2022
Multiple Modes for Continual Learning
Siddhartha Datta
N. Shadbolt
CLL
MoMe
45
2
0
29 Sep 2022
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
James Harrison
Luke Metz
Jascha Narain Sohl-Dickstein
49
22
0
22 Sep 2022
Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja
Rachit Bansal
Kyunghyun Cho
João Sedoc
Naomi Saphra
244
45
0
24 May 2022
Interpolating Compressed Parameter Subspaces
Siddhartha Datta
N. Shadbolt
37
5
0
19 May 2022
Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis
Wuyang Chen
Wei Huang
Xinyu Gong
Boris Hanin
Zhangyang Wang
40
7
0
11 May 2022
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta
N. Shadbolt
AAML
32
6
0
07 Mar 2022
When Do Flat Minima Optimizers Work?
Jean Kaddour
Linqing Liu
Ricardo M. A. Silva
Matt J. Kusner
ODL
28
58
0
01 Feb 2022
Towards Better Plasticity-Stability Trade-off in Incremental Learning: A Simple Linear Connector
Guoliang Lin
Hanlu Chu
Hanjiang Lai
MoMe
CLL
39
45
0
15 Oct 2021
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R. Entezari
Hanie Sedghi
O. Saukh
Behnam Neyshabur
MoMe
39
217
0
12 Oct 2021
Connecting Low-Loss Subspace for Personalized Federated Learning
S. Hahn
Minwoo Jeong
Junghye Lee
FedML
24
18
0
16 Sep 2021
What can linear interpolation of neural network loss landscapes tell us?
Tiffany J. Vlaar
Jonathan Frankle
MoMe
30
27
0
30 Jun 2021
Extracting Global Dynamics of Loss Landscape in Deep Learning Models
Mohammed Eslami
Hamed Eramian
Marcio Gameiro
W. Kalies
Konstantin Mischaikow
23
1
0
14 Jun 2021
Priors in Bayesian Deep Learning: A Review
Vincent Fortuin
UQCV
BDL
38
124
0
14 May 2021
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas
Juhan Bae
Michael Ruogu Zhang
Stanislav Fort
R. Zemel
Roger C. Grosse
MoMe
172
28
0
22 Apr 2021
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory W. Benton
Wesley J. Maddox
Sanae Lotfi
A. Wilson
UQCV
33
67
0
25 Feb 2021
Learning Neural Network Subspaces
Mitchell Wortsman
Maxwell Horton
Carlos Guestrin
Ali Farhadi
Mohammad Rastegari
UQCV
27
85
0
20 Feb 2021
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu
Mathieu Salzmann
Tao R. Lin
Ryota Tomioka
Sabine Süsstrunk
AAML
24
81
0
15 Jun 2020
Bayesian Deep Learning and a Probabilistic Perspective of Generalization
A. Wilson
Pavel Izmailov
UQCV
BDL
OOD
24
642
0
20 Feb 2020
Deep Ensembles: A Loss Landscape Perspective
Stanislav Fort
Huiyi Hu
Balaji Lakshminarayanan
OOD
UQCV
41
620
0
05 Dec 2019
Emergent properties of the local geometry of neural loss landscapes
Stanislav Fort
Surya Ganguli
19
50
0
14 Oct 2019
Stiffness: A New Perspective on Generalization in Neural Networks
Stanislav Fort
Pawel Krzysztof Nowak
Stanislaw Jastrzebski
S. Narayanan
27
94
0
28 Jan 2019
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar
Dheevatsa Mudigere
J. Nocedal
M. Smelyanskiy
P. T. P. Tang
ODL
310
2,896
0
15 Sep 2016
The Loss Surfaces of Multilayer Networks
A. Choromańska
Mikael Henaff
Michaël Mathieu
Gerard Ben Arous
Yann LeCun
ODL
186
1,186
0
30 Nov 2014