Connecting Neural Models Latent Geometries with Relative Geodesic Representations

Main: 9 pages, Appendix: 11 pages, Bibliography: 4 pages, 17 figures, 11 tables
Abstract

Neural models learn representations of high-dimensional data on low-dimensional manifolds. Multiple factors, including stochasticity in the training process, model architecture, and additional inductive biases, can induce different representations, even when the same task is learned on the same data. However, it has recently been shown that when a latent structure is shared between distinct latent spaces, relative distances between representations are preserved up to distortions. Building on this idea, we demonstrate that, by exploiting the differential-geometric structure of the latent spaces of neural models, it is possible to precisely capture the transformations between representational spaces trained on similar data distributions. Specifically, we assume that distinct neural models parametrize approximately the same underlying manifold, and introduce a representation based on the pullback metric that captures the intrinsic structure of the latent space while scaling efficiently to large models. We validate our method experimentally on model stitching and retrieval tasks, covering autoencoders and vision foundation discriminative models, across diverse architectures, datasets, and pretraining schemes.
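As a point of reference for the pullback-metric construction mentioned in the abstract, the sketch below shows the standard way such a metric is obtained from a decoder via automatic differentiation: for a decoder f mapping latents to data space, the pullback (Riemannian) metric at a latent point z is G(z) = J_f(z)^T J_f(z), where J_f is the Jacobian. This is a minimal illustration, not the authors' code; the toy decoder is a hypothetical stand-in for a trained network.

```python
import jax
import jax.numpy as jnp

def decoder(z):
    # Hypothetical toy decoder standing in for a trained neural decoder f: R^d -> R^D.
    W = jnp.arange(6.0).reshape(3, 2)  # maps d=2 latents to D=3 outputs
    return jnp.tanh(W @ z)

def pullback_metric(f, z):
    # Pullback of the Euclidean metric on data space through f:
    # G(z) = J_f(z)^T J_f(z), a d x d positive semi-definite matrix.
    J = jax.jacobian(f)(z)  # D x d Jacobian of the decoder at z
    return J.T @ J

z = jnp.array([0.1, -0.3])
G = pullback_metric(decoder, z)
print(G.shape)  # (2, 2)
```

Lengths of latent curves measured under G(z) approximate distances on the decoded data manifold, which is what makes geodesic (rather than straight-line) distances a natural candidate for comparing latent spaces of different models.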

@article{yu2025_2506.01599,
  title={Connecting Neural Models Latent Geometries with Relative Geodesic Representations},
  author={Hanlin Yu and Berfin Inal and Georgios Arvanitidis and Soren Hauberg and Francesco Locatello and Marco Fumero},
  journal={arXiv preprint arXiv:2506.01599},
  year={2025}
}