ResearchTrend.AI
Uniform Loss vs. Specialized Optimization: A Comparative Analysis in Multi-Task Learning

15 May 2025
Gabriel S. Gama
Valdir Grassi Jr
Abstract

Specialized Multi-Task Optimizers (SMTOs) balance task learning in Multi-Task Learning by addressing issues such as conflicting gradients and differing gradient norms, which hinder equal-weighted task training. However, recent critiques suggest that equally weighted tasks can achieve competitive results compared to SMTOs, arguing that previous SMTO results were influenced by poor hyperparameter optimization and a lack of regularization. In this work, we evaluate these claims through an extensive empirical evaluation of SMTOs, including some of the latest methods, on more complex multi-task problems to clarify this behavior. Our findings indicate that SMTOs perform well compared to uniform loss and that fixed weights can achieve competitive performance compared to SMTOs. Furthermore, we demonstrate why uniform loss performs similarly to SMTOs in some instances. The code will be made publicly available.
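The contrast the abstract draws can be illustrated with a small sketch. Uniform loss simply sums the per-task gradients with equal weight, while an SMTO manipulates them; the sketch below uses a PCGrad-style projection (one well-known SMTO, not necessarily among the methods this paper evaluates) that removes the component of each task gradient that conflicts with another task's gradient. All function names here are illustrative, not from the paper's forthcoming code.

```python
import numpy as np

def uniform_combine(grads):
    # Uniform loss: the combined update direction is the unweighted sum
    # of the per-task gradients.
    return np.sum(np.asarray(grads, dtype=float), axis=0)

def pcgrad_combine(grads):
    """PCGrad-style combination (one example SMTO): for each task gradient,
    sequentially project out components that conflict (negative dot product)
    with the other tasks' original gradients, then sum the results."""
    grads = [np.asarray(g, dtype=float) for g in grads]
    projected = [g.copy() for g in grads]
    for g_i in projected:
        for g_j in grads:
            dot = g_i @ g_j
            if dot < 0:  # conflicting direction: remove the conflicting component
                g_i -= dot / (g_j @ g_j) * g_j
    return np.sum(projected, axis=0)

# Two toy task gradients that conflict in their first component.
g1, g2 = [1.0, 0.0], [-1.0, 1.0]
print(uniform_combine([g1, g2]))  # conflicting components cancel: [0. 1.]
print(pcgrad_combine([g1, g2]))   # projection preserves shared progress: [0.5 1.5]
```

When task gradients do not conflict (all pairwise dot products are non-negative), the projection is a no-op and the two schemes coincide, which is one intuition for why uniform loss can match SMTOs on some problems.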

@article{gama2025_2505.10347,
  title={Uniform Loss vs. Specialized Optimization: A Comparative Analysis in Multi-Task Learning},
  author={Gabriel S. Gama and Valdir Grassi Jr},
  journal={arXiv preprint arXiv:2505.10347},
  year={2025}
}