
ResNets Are Deeper Than You Think

Main: 9 pages · 11 figures · Bibliography: 4 pages · Appendix: 11 pages
Abstract

Residual connections remain ubiquitous in modern neural network architectures nearly a decade after their introduction. Their widespread adoption is often credited to dramatically improved trainability: residual networks train faster, optimize more stably, and achieve higher accuracy than their feedforward counterparts. While numerous techniques, ranging from improved initialization to advanced learning rate schedules, have been proposed to close the performance gap between residual and feedforward networks, the gap has persisted. In this work, we propose an alternative explanation: residual networks do not merely reparameterize feedforward networks, but instead inhabit a different function space. We design a controlled post-training comparison that isolates generalization performance from trainability, and we find that variable-depth architectures similar to ResNets consistently outperform fixed-depth networks, even when optimization is unlikely to make a difference. These results suggest that residual connections confer performance advantages beyond optimization, pointing instead to a deeper inductive bias aligned with the structure of natural data.
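For reference, the sketch below (not from the paper; the PyTorch-style implementation and all names are illustrative assumptions) contrasts the two block types the abstract compares: a plain fixed-depth feedforward block and a residual block whose identity skip connection, y = x + F(x), is what allows the network's effective depth to vary.

```python
# Minimal sketch (illustrative, not the authors' code): a plain feedforward
# block vs. a residual block with an identity skip connection.
import torch
import torch.nn as nn


class PlainBlock(nn.Module):
    """Fixed-depth block: every layer is always applied to the input."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.conv2(self.relu(self.conv1(x))))


class ResidualBlock(nn.Module):
    """Residual block: output is the input plus a learned residual F(x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # If the residual branch learns F(x) ~ 0, the block acts like an
        # identity layer, so stacked blocks behave like a shallower network.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))


# Both blocks preserve the input shape, so they can be stacked to any depth.
x = torch.randn(1, 16, 32, 32)
print(PlainBlock(16)(x).shape, ResidualBlock(16)(x).shape)
```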

@article{mehmeti-göpel2025_2506.14386,
  title={ResNets Are Deeper Than You Think},
  author={Christian H.X. Ali Mehmeti-Göpel and Michael Wand},
  journal={arXiv preprint arXiv:2506.14386},
  year={2025}
}