Wasserstein Barycenter-based Model Fusion and Linear Mode Connectivity of Neural Networks
A. K. Akash, Sixu Li, Nicolas García Trillos
arXiv:2210.06671 · 13 October 2022
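The title pairs Wasserstein barycenter-based model fusion with linear mode connectivity, i.e., the question of whether the loss stays low along the straight line between two sets of trained weights. The sketch below shows such an interpolation check in PyTorch; the toy MLP, the random data, and the helper names (`interpolate_state_dicts`, `loss_along_path`) are illustrative placeholders, not code from the paper.

```python
# A minimal sketch of a linear mode connectivity check, assuming two models
# with identical architecture. Model, data, and training are placeholders.
import copy
import torch
import torch.nn as nn

def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Return (1 - alpha) * sd_a + alpha * sd_b, key by key."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def loss_along_path(model, sd_a, sd_b, x, y, steps=11):
    """Evaluate the loss at evenly spaced points on the segment between two
    weight configurations; a flat profile suggests linear mode connectivity."""
    criterion = nn.CrossEntropyLoss()
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha))
        model.eval()
        with torch.no_grad():
            losses.append(criterion(model(x), y).item())
    return losses

if __name__ == "__main__":
    # Two copies of a toy MLP stand in for two independently trained endpoints.
    def make_mlp():
        return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

    model_a, model_b = make_mlp(), make_mlp()
    x, y = torch.randn(128, 20), torch.randint(0, 3, (128,))
    probe = copy.deepcopy(model_a)
    print(loss_along_path(probe, model_a.state_dict(), model_b.state_dict(), x, y))
```

In practice the endpoints would first be aligned (e.g., by permutation matching or an optimal-transport coupling of neurons) before interpolating; the check above only measures the loss barrier along the naive straight-line path.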

Papers citing "Wasserstein Barycenter-based Model Fusion and Linear Mode Connectivity of Neural Networks" (11 papers)

A Lipschitz spaces view of infinitely wide shallow neural networks
Francesca Bartolucci, Marcello Carioni, José A. Iglesias, Yury Korolev, Emanuele Naldi, S. Vigogna · 18 Oct 2024

Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks
Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry · MoMe · 02 Oct 2024

Simultaneous linear connectivity of neural networks modulo permutation
Ekansh Sharma, Devin Kwok, Tom Denton, Daniel M. Roy, David Rolnick, Gintare Karolina Dziugaite · 09 Apr 2024

FedFisher: Leveraging Fisher Information for One-Shot Federated Learning
Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi · FedML · 19 Mar 2024

Proving Linear Mode Connectivity of Neural Networks via Optimal Transport
Damien Ferbach, Baptiste Goujaud, Gauthier Gidel, Aymeric Dieuleveut · MoMe · 29 Oct 2023

Equivariant Deep Weight Space Alignment
Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron · 20 Oct 2023

Transformer Fusion with Optimal Transport
Moritz Imfeld, Jacopo Graldi, Marco Giordano, Thomas Hofmann, Sotiris Anagnostidis, Sidak Pal Singh · ViT, MoMe · 09 Oct 2023

Deep Model Fusion: A Survey
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen · FedML, MoMe · 27 Sep 2023

Re-basin via implicit Sinkhorn differentiation
F. Guerrero-Peña, H. R. Medeiros, Thomas Dubail, Masih Aminbeidokhti, Eric Granger, M. Pedersoli · MoMe · 22 Dec 2022

Git Re-Basin: Merging Models modulo Permutation Symmetries
Samuel K. Ainsworth, J. Hayase, S. Srinivasa · MoMe · 11 Sep 2022

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016