This paper introduces novel alternate training procedures for hard-parameter sharing Multi-Task Neural Networks (MTNNs). Traditional MTNN training faces challenges in managing conflicting loss gradients, often yielding sub-optimal performance. The proposed alternate training method updates shared and task-specific weights alternately across epochs, exploiting the multi-head architecture of the model. This approach reduces the computational cost per epoch and the memory requirements. Convergence properties similar to those of the classical stochastic gradient method are established. Empirical experiments demonstrate enhanced training regularization and reduced computational demands. In summary, the proposed alternate training procedures offer a promising advancement for training hard-parameter sharing MTNNs.
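To make the alternating idea concrete, the sketch below trains a deliberately tiny hard-parameter-sharing model with plain gradient descent: a single shared weight feeds two task-specific scalar "heads", and even-numbered epochs update only the shared weight while odd-numbered epochs update only the heads. All names, the toy model, and the learning rate are illustrative assumptions, not details taken from the paper (which concerns stochastic gradient updates on full MTNNs).

```python
import random

# Toy hard-parameter-sharing model (illustrative, not the paper's setup):
# a shared scalar weight w feeds two task-specific heads h[0], h[1].
# Prediction for task t on input x: y_t = h[t] * (w * x).
random.seed(0)

# Synthetic data: task 1 targets 2*x, task 2 targets -3*x.
data = [(x, 2.0 * x, -3.0 * x) for x in [random.uniform(-1, 1) for _ in range(50)]]

w, h = 1.0, [1.0, 1.0]  # shared weight, task-specific heads
lr = 0.1

for epoch in range(400):
    gw, gh = 0.0, [0.0, 0.0]
    for x, y1, y2 in data:
        z = w * x                        # shared representation
        for t, y in enumerate((y1, y2)):
            err = h[t] * z - y           # residual for task t
            gw += 2 * err * h[t] * x     # d(squared loss)/dw
            gh[t] += 2 * err * z         # d(squared loss)/dh[t]
    if epoch % 2 == 0:
        # Even epoch: update the shared weight only.
        w -= lr * gw / len(data)
    else:
        # Odd epoch: update the task-specific heads only.
        for t in range(2):
            h[t] -= lr * gh[t] / len(data)

# The products w*h[t] should approximate the per-task slopes 2 and -3.
print(w * h[0], w * h[1])
```

Only one of the two parameter groups needs gradients and optimizer state in any given epoch, which is the source of the per-epoch computational and memory savings the abstract mentions.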
@article{bellavia2025_2312.16340,
  title   = {ATE-SG: Alternate Through the Epochs Stochastic Gradient for Multi-Task Neural Networks},
  author  = {Stefania Bellavia and Francesco Della Santa and Alessandra Papini},
  journal = {arXiv preprint arXiv:2312.16340},
  year    = {2025}
}