
arXiv:1511.06114 (v4, latest)

Multi-task Sequence to Sequence Learning

19 November 2015
Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser
Abstract

Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications have focused on a single task, and little work has explored this framework for multiple tasks. This paper examines three settings for multi-task sequence to sequence learning: (a) the one-to-many setting, where the encoder is shared between several tasks such as machine translation and syntactic parsing; (b) the many-to-one setting, useful when only the decoder can be shared, as in the case of translation and image caption generation; and (c) the many-to-many setting, where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on parsing and image caption generation improves translation accuracy, and vice versa. We also present novel findings on the benefit of different unsupervised learning objectives: the skip-thought objective is beneficial to translation, while the sequence autoencoder objective is not.
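For concreteness, the one-to-many setting described in the abstract (one shared encoder feeding task-specific decoders) can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the paper's implementation: the module names, vocabulary sizes, dimensions, and the simple per-task training step are all assumptions made here.

```python
# Minimal one-to-many multi-task seq2seq sketch (illustrative, not the
# paper's code): one shared encoder, task-specific decoders, e.g. a
# translation decoder and a parsing decoder.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids -> final (h, c) encoder state
        _, state = self.lstm(self.embed(src))
        return state

class TaskDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt_in, state):
        # Teacher forcing, conditioned on the shared encoder state.
        h, _ = self.lstm(self.embed(tgt_in), state)
        return self.out(h)  # (batch, tgt_len, vocab)

encoder = SharedEncoder(vocab_size=10_000)          # sizes are made up
decoders = {"translation": TaskDecoder(vocab_size=12_000),
            "parsing": TaskDecoder(vocab_size=128)}
params = list(encoder.parameters()) + [p for d in decoders.values()
                                       for p in d.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(task, src, tgt_in, tgt_out):
    # Each step updates the shared encoder plus one task's decoder.
    opt.zero_grad()
    logits = decoders[task](tgt_in, encoder(src))
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
    loss.backward()
    opt.step()
    return loss.item()

# Example: one translation step on random token ids.
src = torch.randint(0, 10_000, (8, 20))
tgt = torch.randint(0, 12_000, (8, 15))
train_step("translation", src, tgt[:, :-1], tgt[:, 1:])
```

In training, the tasks would be interleaved, for example by sampling each task with some mixing ratio, so that the shared encoder receives gradients from all of them; the many-to-one and many-to-many settings swap which side of the model is shared.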
