
Analyzing the Role of Permutation Invariance in Linear Mode Connectivity

Abstract

It was empirically observed in Entezari et al. (2021) that, when accounting for the permutation invariance of neural networks, there is likely no loss barrier along the linear interpolation between two SGD solutions -- a phenomenon known as linear mode connectivity (LMC) modulo permutation. This phenomenon has attracted significant attention due to both its theoretical interest and its practical relevance in applications such as model merging. In this paper, we provide a fine-grained analysis of this phenomenon for two-layer ReLU networks under a teacher-student setup. We show that as the student network width $m$ increases, the LMC loss barrier modulo permutation exhibits a double descent behavior. In particular, when $m$ is sufficiently large, the barrier decreases to zero at the rate $O(m^{-1/2})$. Notably, this rate does not suffer from the curse of dimensionality and demonstrates how substantially permutation can reduce the LMC loss barrier. Moreover, we observe a sharp transition in the sparsity of GD/SGD solutions as the learning rate increases, and we investigate how this sparsity preference affects the LMC loss barrier modulo permutation. Experiments on both synthetic and MNIST datasets corroborate our theoretical predictions and reveal a similar trend for more complex network architectures.
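The quantities in the abstract can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the paper's construction: it evaluates the LMC loss barrier of Entezari et al. (2021) between two two-layer ReLU networks $f(x) = a^\top \mathrm{ReLU}(Wx)$, and aligns hidden units with a simple weight-matching heuristic via SciPy's Hungarian solver. All function names (forward, align, lmc_barrier) are illustrative assumptions.

# Minimal sketch, assuming two-layer ReLU networks f(x) = a^T ReLU(W x)
# with W of shape (m, d) and a of shape (m,). Not the paper's method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def forward(W, a, X):
    # Rows of W are the m hidden neurons; a holds the output weights.
    return np.maximum(X @ W.T, 0.0) @ a

def mse(W, a, X, y):
    return np.mean((forward(W, a, X) - y) ** 2)

def align(W1, a1, W2, a2):
    # Illustrative weight-matching heuristic: permute network 2's hidden
    # units to maximize a pairwise neuron-similarity score with network 1.
    C = W1 @ W2.T + np.outer(a1, a2)
    _, perm = linear_sum_assignment(-C)  # Hungarian algorithm, maximizing C
    return W2[perm], a2[perm]

def lmc_barrier(W1, a1, W2, a2, X, y, n_pts=51):
    # Barrier = max over t of the loss of the interpolated network minus
    # the linear interpolation of the endpoint losses (Entezari et al., 2021).
    L0, L1 = mse(W1, a1, X, y), mse(W2, a2, X, y)
    return max(
        mse((1 - t) * W1 + t * W2, (1 - t) * a1 + t * a2, X, y)
        - ((1 - t) * L0 + t * L1)
        for t in np.linspace(0.0, 1.0, n_pts)
    )

Comparing lmc_barrier(W1, a1, W2, a2, X, y) with lmc_barrier(W1, a1, *align(W1, a1, W2, a2), X, y) illustrates the "modulo permutation" comparison: the barrier is measured after choosing a hidden-unit permutation of one network, which is what the paper's $O(m^{-1/2})$ rate concerns.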

@article{zhan2025_2503.06001,
  title={Analyzing the Role of Permutation Invariance in Linear Mode Connectivity},
  author={Keyao Zhan and Puheng Li and Lei Wu},
  journal={arXiv preprint arXiv:2503.06001},
  year={2025}
}