REPAIR: REnormalizing Permuted Activations for Interpolation Repair
arXiv:2211.08403 · 15 November 2022
Keller Jordan, Hanie Sedghi, O. Saukh, R. Entezari, Behnam Neyshabur
MoMe
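
For context on the idea named in the title: after two independently trained networks are aligned by permuting neurons, their weights can be linearly interpolated, but the interpolated network's hidden-activation statistics collapse, and REPAIR-style post-processing renormalizes those activations. The sketch below is an illustrative approximation, not the authors' implementation; the function name, the `loader` argument, and the BatchNorm-reset shortcut are assumptions (the paper also covers rescaling per-neuron statistics in networks without BatchNorm).

```python
# Illustrative sketch only (not the authors' code). Assumes model_b has already
# been permuted into model_a's neuron ordering, e.g. by a weight-matching step.
import copy
import torch

@torch.no_grad()
def interpolate_and_repair(model_a, model_b, loader, alpha=0.5, device="cpu"):
    # 1) Linearly interpolate the two (aligned) sets of weights.
    merged = copy.deepcopy(model_a).to(device)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({
        k: ((1 - alpha) * sd_a[k] + alpha * sd_b[k])
        if sd_a[k].is_floating_point() else sd_a[k]   # skip integer buffers
        for k in sd_a
    })

    # 2) "Repair": interpolated hidden units suffer variance collapse, so
    # re-estimate activation statistics. For BatchNorm networks a simple proxy
    # is to reset the running statistics and recompute them on training data.
    for m in merged.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
    merged.train()                 # BN updates running stats in train mode
    for x, _ in loader:            # a few batches are typically enough
        merged(x.to(device))
    merged.eval()
    return merged
```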

Papers citing "REPAIR: REnormalizing Permuted Activations for Interpolation Repair"

50 of 54 citing papers shown.

Understanding Mode Connectivity via Parameter Space Symmetry
    B. Zhao, Nima Dehmamy, Robin Walters, Rose Yu · 29 May 2025
BECAME: BayEsian Continual Learning with Adaptive Model MErging
    Mei Li, Yuxiang Lu, Qinyan Dai, Suizhi Huang, Yue Ding, Hongtao Lu · CLL, MoMe · 03 Apr 2025
Sens-Merging: Sensitivity-Guided Parameter Balancing for Merging Large Language Models
    Shuqi Liu, Han Wu, Bowei He, Xiongwei Han, Mingxuan Yuan, Linqi Song · MoMe · 20 Feb 2025
Linear Mode Connectivity in Differentiable Tree Ensembles
    Ryuichi Kanoh, M. Sugiyama · 17 Feb 2025
Forget the Data and Fine-Tuning! Just Fold the Network to Compress
    Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh · 17 Feb 2025
Multi-Task Model Merging via Adaptive Weight Disentanglement
    Feng Xiong, Runxi Cheng, Wang Chen, Zhanqiu Zhang, Yiwen Guo, Chun Yuan, Ruifeng Xu · MoMe · 10 Jan 2025
Training-free Heterogeneous Model Merging
    Zhengqi Xu, Han Zheng, Jie Song, Li Sun, Mingli Song · MoMe · 03 Jan 2025
Task Singular Vectors: Reducing Task Interference in Model Merging
    Antonio Andrea Gargiulo, Donato Crisostomi, Maria Sofia Bucarelli, Simone Scardapane, Fabrizio Silvestri, Emanuele Rodolà · MoMe · 26 Nov 2024
ATM: Improving Model Merging by Alternating Tuning and Merging
    Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodolà · MoMe · 05 Nov 2024
PLeaS -- Merging Models with Permutations and Least Squares
    Anshul Nasery, J. Hayase, Pang Wei Koh, Sewoong Oh · MoMe · 02 Jul 2024
Neural Networks Trained by Weight Permutation are Universal Approximators
    Yongqiang Cai, Gaohang Chen, Zhonghua Qiao · 01 Jul 2024
Arcee's MergeKit: A Toolkit for Merging Large Language Models
    Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, Jacob Solawetz · MoMe, KELM · 20 Mar 2024
MedMerge: Merging Models for Effective Transfer Learning to Medical Imaging Tasks
    Ibrahim Almakky, Santosh Sanjeev, Anees Ur Rehman Hashmi, Mohammad Areeb Qazi, Mohammad Yaqub · FedML, MoMe · 18 Mar 2024
Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching: With Insights into Other Permutation Search Methods
    Akira Ito, Masanori Yamada, Atsutoshi Kumagai · MoMe · 06 Feb 2024
Git Re-Basin: Merging Models modulo Permutation Symmetries
    Samuel K. Ainsworth, J. Hayase, S. Srinivasa · MoMe · 11 Sep 2022
Patching open-vocabulary models by interpolating weights
    Gabriel Ilharco, Mitchell Wortsman, S. Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt · VLM, KELM · 10 Aug 2022
Linear Connectivity Reveals Generalization Strategies
    Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra · 24 May 2022
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
    Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt · MoMe · 10 Mar 2022
Deep Networks on Toroids: Removing Symmetries Reveals the Structure of Flat Regions in the Landscape Geometry
    Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, R. Zecchina · 07 Feb 2022
Stochastic Weight Averaging Revisited
    Hao Guo, Jiyong Jin, B. Liu · 03 Jan 2022
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
    R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur · MoMe · 12 Oct 2021
Robust fine-tuning of zero-shot models
    Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt · VLM · 04 Sep 2021
Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances
    Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, W. Gerstner, Johanni Brea · AI4CE · 25 May 2021
Optimizing Mode Connectivity via Neuron Alignment
    N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, P. Sattigeri, Rongjie Lai · MoMe · 05 Sep 2020
Entropic gradient descent algorithms and wide flat minima
    Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, R. Zecchina · ODL, MLT · 14 Jun 2020
Safe Crossover of Neural Networks Through Neuron Alignment
    Thomas Uriot, Dario Izzo · 23 Mar 2020
Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
    Chaoyue Liu, Libin Zhu, M. Belkin · ODL · 29 Feb 2020
Federated Learning with Matched Averaging
    Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, Y. Khazaeni · FedML · 15 Feb 2020
Linear Mode Connectivity and the Lottery Ticket Hypothesis
    Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin · MoMe · 11 Dec 2019
Deep Ensembles: A Loss Landscape Perspective
    Stanislav Fort, Huiyi Hu, Balaji Lakshminarayanan · OOD, UQCV · 05 Dec 2019
Model Fusion via Optimal Transport
    Sidak Pal Singh, Martin Jaggi · MoMe, FedML · 12 Oct 2019
Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape
    Johanni Brea, Berfin Şimşek, Bernd Illing, W. Gerstner · 05 Jul 2019
Shaping the learning landscape in neural networks around wide flat minima
    Carlo Baldassi, Fabrizio Pittorino, R. Zecchina · MLT · 20 May 2019
Uniform convergence may be unable to explain generalization in deep learning
    Vaishnavh Nagarajan, J. Zico Kolter · MoMe, AI4CE · 13 Feb 2019
Fixup Initialization: Residual Learning Without Normalization
    Hongyi Zhang, Yann N. Dauphin, Tengyu Ma · ODL, AI4CE · 27 Jan 2019
On the loss landscape of a class of deep neural networks with no bad local valleys
    Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein · 27 Sep 2018
The jamming transition as a paradigm to understand the loss landscape of deep neural networks
    Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, Matthieu Wyart · 25 Sep 2018
Multi-Task Zipping via Layer-wise Neuron Sharing
    Xiaoxi He, Zimu Zhou, Lothar Thiele · MoMe · 24 May 2018
A Mean Field View of the Landscape of Two-Layers Neural Networks
    Song Mei, Andrea Montanari, Phan-Minh Nguyen · MLT · 18 Apr 2018
Averaging Weights Leads to Wider Optima and Better Generalization
    Pavel Izmailov, Dmitrii Podoprikhin, T. Garipov, Dmitry Vetrov, A. Wilson · FedML, MoMe · 14 Mar 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
    Jonathan Frankle, Michael Carbin · 09 Mar 2018
Essentially No Barriers in Neural Network Energy Landscape
    Felix Dräxler, K. Veschgini, M. Salmhofer, Fred Hamprecht · MoMe · 02 Mar 2018
Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
    T. Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, A. Wilson · UQCV · 27 Feb 2018
Visualizing the Loss Landscape of Neural Nets
    Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein · 28 Dec 2017
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
    Han Xiao, Kashif Rasul, Roland Vollgraf · 25 Aug 2017
Exploring Generalization in Deep Learning
    Behnam Neyshabur, Srinadh Bhojanapalli, David A. McAllester, Nathan Srebro · FAtt · 27 Jun 2017
Understanding deep learning requires rethinking generalization
    Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals · HAI · 10 Nov 2016
Topology and Geometry of Half-Rectified Network Optimization
    C. Freeman, Joan Bruna · 04 Nov 2016
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016
Layer Normalization
    Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton · 21 Jul 2016