
NeuroTrails: Training with Dynamic Sparse Heads as the Key to Effective Ensembling

Main: 9 pages
10 figures
Bibliography: 6 pages
19 tables
Appendix: 12 pages
Abstract

Model ensembles have long been a cornerstone for improving generalization and robustness in deep learning. However, their effectiveness often comes at the cost of substantial computational overhead. To address this issue, state-of-the-art methods aim to replicate ensemble-class performance without requiring multiple independently trained networks. Unfortunately, these algorithms often still demand considerable compute at inference. In response to these limitations, we introduce NeuroTrails, a sparse multi-head architecture with dynamically evolving topology. This unexplored model-agnostic training paradigm improves ensemble performance while reducing the required resources. We analyze the underlying reason for its effectiveness and observe that the various neural trails induced by dynamic sparsity attain a Goldilocks zone of prediction diversity. NeuroTrails is effective with both convolutional and transformer-based architectures on computer vision and language tasks. Experiments on ResNet-50/ImageNet and LLaMA-350M/C4, among many others, demonstrate increased accuracy and stronger robustness in zero-shot generalization, while requiring significantly fewer parameters.
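To make the idea concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: a shared backbone feeding several sparse prediction heads whose connectivity is periodically pruned and regrown during training, in the style of dynamic sparse training methods such as SET. This is an illustration only, not the authors' implementation; the class and function names (SparseHead, MultiHeadSparseNet, prune_and_regrow) and all hyperparameters are hypothetical.

# Sketch (not the authors' code) of a shared backbone with multiple
# dynamically sparse prediction heads, ensembled at inference time.
import torch
import torch.nn as nn


class SparseHead(nn.Module):
    """A linear prediction head whose weights are masked to a target sparsity."""

    def __init__(self, in_features: int, num_classes: int, sparsity: float = 0.9):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)
        # Random initial mask: keep roughly (1 - sparsity) of the weights.
        mask = (torch.rand_like(self.linear.weight) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.linear.weight * self.mask, self.linear.bias)

    @torch.no_grad()
    def prune_and_regrow(self, fraction: float = 0.2) -> None:
        """Drop the smallest-magnitude active weights and regrow the same
        number of connections at random inactive positions (SET-style update)."""
        w = (self.linear.weight * self.mask).abs()
        active = self.mask.bool()
        n_prune = int(fraction * active.sum().item())
        if n_prune == 0:
            return
        # Inactive positions recorded before pruning, used as regrowth candidates.
        inactive_idx = (self.mask == 0).nonzero(as_tuple=False)
        # Prune: deactivate the smallest-magnitude active weights.
        threshold = torch.kthvalue(w[active], n_prune).values
        pruned = (w <= threshold) & active
        self.mask[pruned] = 0.0
        # Regrow: activate an equal number of previously inactive positions.
        n_grow = min(int(pruned.sum().item()), len(inactive_idx))
        grow = inactive_idx[torch.randperm(len(inactive_idx))[:n_grow]]
        self.mask[grow[:, 0], grow[:, 1]] = 1.0


class MultiHeadSparseNet(nn.Module):
    """Shared dense backbone with several dynamically sparse heads ("trails")."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, num_heads: int = 3):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            SparseHead(feat_dim, num_classes) for _ in range(num_heads)
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]  # one prediction per head


# Usage sketch: train all heads on the same batch, periodically evolve each
# head's sparse topology, and average the heads' logits at inference time.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())
model = MultiHeadSparseNet(backbone, feat_dim=256, num_classes=10, num_heads=3)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = sum(nn.functional.cross_entropy(out, y) for out in model(x))
loss.backward()
for head in model.heads:
    head.prune_and_regrow(fraction=0.2)  # evolve the sparse connectivity
ensemble_logits = torch.stack(model(x)).mean(dim=0)  # ensemble prediction

The prune-and-regrow step is what keeps the heads' sparse connectivity patterns distinct over training, which is the mechanism the abstract credits for reaching a useful level of prediction diversity at a fraction of the parameter cost of independent ensemble members.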

@article{grooten2025_2505.17909,
  title={NeuroTrails: Training with Dynamic Sparse Heads as the Key to Effective Ensembling},
  author={Bram Grooten and Farid Hasanov and Chenxiang Zhang and Qiao Xiao and Boqian Wu and Zahra Atashgahi and Ghada Sokar and Shiwei Liu and Lu Yin and Elena Mocanu and Mykola Pechenizkiy and Decebal Constantin Mocanu},
  journal={arXiv preprint arXiv:2505.17909},
  year={2025}
}