ATE-SG: Alternate Through the Epochs Stochastic Gradient for Multi-Task Neural Networks

26 December 2023
Stefania Bellavia
Francesco Della Santa
Alessandra Papini
Abstract

This paper introduces novel alternate training procedures for hard-parameter sharing Multi-Task Neural Networks (MTNNs). Traditional MTNN training faces challenges in managing conflicting loss gradients, often yielding sub-optimal performance. The proposed alternate training method updates shared and task-specific weights alternately through the epochs, exploiting the multi-head architecture of the model. This approach reduces computational costs per epoch and memory requirements. Convergence properties similar to those of the classical stochastic gradient method are established. Empirical experiments demonstrate enhanced training regularization and reduced computational demands. In summary, our alternate training procedures offer a promising advancement for the training of hard-parameter sharing MTNNs.
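The alternation described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration (names such as MTNN, set_requires_grad, and alternate_epoch_training are placeholders, not taken from the paper): a hard-parameter-sharing network with a shared trunk and one head per task, where even epochs update only the shared weights and odd epochs update only the task-specific weights.

```python
import torch
import torch.nn as nn

# Hypothetical hard-parameter-sharing MTNN: one shared trunk, one head per task.
class MTNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in task_out_dims)

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]


def set_requires_grad(params, flag):
    for p in params:
        p.requires_grad_(flag)


def alternate_epoch_training(model, loader, task_losses, epochs, lr=1e-3):
    """Alternate through the epochs: even epochs update only the shared (trunk)
    weights, odd epochs only the task-specific (head) weights. A sketch of the
    alternation idea, not the authors' exact ATE-SG schedule."""
    opt_shared = torch.optim.SGD(model.trunk.parameters(), lr=lr)
    opt_tasks = torch.optim.SGD(model.heads.parameters(), lr=lr)
    for epoch in range(epochs):
        train_shared = (epoch % 2 == 0)
        # Freeze the inactive block so its gradients are neither computed nor stored.
        set_requires_grad(model.trunk.parameters(), train_shared)
        set_requires_grad(model.heads.parameters(), not train_shared)
        opt = opt_shared if train_shared else opt_tasks
        for x, targets in loader:  # targets: one tensor per task
            opt.zero_grad()
            outputs = model(x)
            total = sum(loss_fn(out, tgt)
                        for loss_fn, out, tgt in zip(task_losses, outputs, targets))
            total.backward()
            opt.step()
```

Freezing the inactive parameter block each epoch is one way to realize the reduced per-epoch gradient computation and memory footprint mentioned in the abstract; the paper itself should be consulted for the precise update rules and the convergence analysis.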

@article{bellavia2025_2312.16340,
  title={ATE-SG: Alternate Through the Epochs Stochastic Gradient for Multi-Task Neural Networks},
  author={Stefania Bellavia and Francesco Della Santa and Alessandra Papini},
  journal={arXiv preprint arXiv:2312.16340},
  year={2025}
}