
Multi-task Sequence to Sequence Learning

19 November 2015
Minh-Thang Luong
Quoc V. Le
Ilya Sutskever
Oriol Vinyals
Lukasz Kaiser
arXiv: 1511.06114 (abs / PDF / HTML)
Abstract

Sequence-to-sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications have focused on a single task, and little work has explored the framework for multiple tasks. This paper examines three settings for multi-task sequence-to-sequence learning: (a) the one-to-many setting, where the encoder is shared between several tasks such as machine translation and syntactic parsing; (b) the many-to-one setting, useful when only the decoder can be shared, as in translation and image caption generation; and (c) the many-to-many setting, where multiple encoders and decoders are shared, as is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image-caption data can improve translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. We also reveal interesting properties of two unsupervised learning objectives, autoencoder and skip-thought, in the context of multi-task learning: compared to skip-thought, the autoencoder objective helps less in terms of perplexity but more in terms of BLEU score.
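To make the one-to-many setting concrete, below is a minimal PyTorch-style sketch of a shared encoder feeding two task-specific decoders (translation and parsing). The class, vocabulary sizes, and dimensions are illustrative assumptions for this page, not the authors' implementation.

```python
import torch
import torch.nn as nn

class OneToManySeq2Seq(nn.Module):
    """One shared encoder, separate decoders per task (hypothetical sketch)."""

    def __init__(self, src_vocab, mt_vocab, parse_vocab, emb=256, hidden=512):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)  # shared across tasks

        # Task-specific decoder for translation
        self.mt_embed = nn.Embedding(mt_vocab, emb)
        self.mt_decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.mt_out = nn.Linear(hidden, mt_vocab)

        # Task-specific decoder for syntactic parsing (linearized trees)
        self.parse_embed = nn.Embedding(parse_vocab, emb)
        self.parse_decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.parse_out = nn.Linear(hidden, parse_vocab)

    def forward(self, src, tgt, task):
        # Encode the source once; reuse the final state for either decoder.
        _, state = self.encoder(self.src_embed(src))
        if task == "translation":
            dec_out, _ = self.mt_decoder(self.mt_embed(tgt), state)
            return self.mt_out(dec_out)
        else:  # "parsing"
            dec_out, _ = self.parse_decoder(self.parse_embed(tgt), state)
            return self.parse_out(dec_out)
```

In training, each step would pick one task, roughly in proportion to a mixing ratio between the tasks, and update the shared encoder together with that task's decoder on a batch from that task's data.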
