arXiv: 2106.16004

What can linear interpolation of neural network loss landscapes tell us?
Tiffany J. Vlaar, Jonathan Frankle
30 June 2021 [MoMe]
Links: ArXiv | PDF | HTML

Papers citing "What can linear interpolation of neural network loss landscapes tell us?"
24 / 24 papers shown
Low-Loss Space in Neural Networks is Continuous and Fully Connected
  Yongding Tian, Zaid Al-Ars, Maksim Kitsak, P. Hofstee (05 May 2025) [3DPC]
High-dimensional manifold of solutions in neural networks: insights from statistical physics
  Enrico M. Malatesta (20 Feb 2025)
In Search of the Successful Interpolation: On the Role of Sharpness in CLIP Generalization
  Alireza Abdollahpoorrostam (21 Oct 2024)
Weight Scope Alignment: A Frustratingly Easy Method for Model Merging
  Yichu Xu, Xin-Chun Li, Le Gan, De-Chuan Zhan (22 Aug 2024) [MoMe]
The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof
  Derek Lim, Moe Putterman, Robin Walters, Haggai Maron, Stefanie Jegelka (30 May 2024)
Visualizing, Rethinking, and Mining the Loss Landscape of Deep Neural Networks
  Xin-Chun Li, Lan Li, De-Chuan Zhan (21 May 2024)
Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks
  Xin-Chun Li, Jinli Tang, Bo Zhang, Lan Li, De-Chuan Zhan (21 May 2024)
Simultaneous linear connectivity of neural networks modulo permutation
  Ekansh Sharma, Devin Kwok, Tom Denton, Daniel M. Roy, David Rolnick, Gintare Karolina Dziugaite (09 Apr 2024)
Exploring Neural Network Landscapes: Star-Shaped and Geodesic Connectivity
  Zhanran Lin, Puheng Li, Lei Wu (09 Apr 2024)
Improving Group Connectivity for Generalization of Federated Deep Learning
  Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Chao Wu (29 Feb 2024) [AI4CE, FedML]
Training-time Neuron Alignment through Permutation Subspace for Improving Linear Mode Connectivity and Model Fusion
  Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu (02 Feb 2024)
Disentangling Linear Mode-Connectivity
  Gul Sena Altintas, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann (15 Dec 2023)
Proving Linear Mode Connectivity of Neural Networks via Optimal Transport
  Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut (29 Oct 2023) [MoMe]
Layer-wise Linear Mode Connectivity
  Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi (13 Jul 2023) [FedML, FAtt, MoMe]
Distilled Pruning: Using Synthetic Data to Win the Lottery
  Luke McDermott, Daniel Cummings (07 Jul 2023) [SyDa, DD]
Transferring Learning Trajectories of Neural Networks
  Daiki Chijiwa (23 May 2023)
Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width
  Dayal Singh Kalra, M. Barkeshli (23 Feb 2023)
Revisiting Weighted Aggregation in Federated Learning with Neural Networks
  Zexi Li, Tao R. Lin, Xinyi Shang, Chao-Xiang Wu (14 Feb 2023) [FedML]
Class Interference of Deep Neural Networks
  Dongcui Diao, Hengshuai Yao, Bei Jiang (31 Oct 2022)
Git Re-Basin: Merging Models modulo Permutation Symmetries
  Samuel K. Ainsworth, J. Hayase, S. Srinivasa (11 Sep 2022) [MoMe]
FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks
  Aleksandar Doknic, Torsten Möller (09 Apr 2022)
Multirate Training of Neural Networks
  Tiffany J. Vlaar, B. Leimkuhler (20 Jun 2021)
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
  James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse (22 Apr 2021) [MoMe]
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
  N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]