
arXiv:1605.06069

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

19 May 2016
Iulian Serban
Alessandro Sordoni
Ryan J. Lowe
Laurent Charlin
Joelle Pineau
Aaron Courville
Yoshua Bengio
Abstract

Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as that found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models, and that the latent variables facilitate the generation of long outputs while maintaining the dialogue context.
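The generative process the abstract describes — an utterance-level encoder feeding a context RNN, with a latent variable sampled once per response and held fixed across all decoding steps — can be sketched in a few lines. The following is a minimal NumPy toy with random, untrained weights; the function names, hidden size, and tanh cells are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # toy hidden size (illustrative, not from the paper)

def rnn_step(x, h, Wx, Wh):
    # simple tanh recurrence; stands in for a learned GRU/LSTM cell
    return np.tanh(Wx @ x + Wh @ h)

# random stand-in parameters; a real model would learn these
Wx_enc, Wh_enc = rng.normal(size=(H, H)), rng.normal(size=(H, H))
Wx_ctx, Wh_ctx = rng.normal(size=(H, H)), rng.normal(size=(H, H))
W_mu, W_logvar = rng.normal(size=(H, H)), rng.normal(size=(H, H))
Wx_dec, Wh_dec = rng.normal(size=(H, 2 * H)), rng.normal(size=(H, H))

def encode_utterance(tokens):
    # bottom level of the hierarchy: one RNN pass over a single utterance
    h = np.zeros(H)
    for x in tokens:
        h = rnn_step(x, h, Wx_enc, Wh_enc)
    return h

def generate_response(dialogue, steps=3):
    """dialogue: list of utterances, each a list of H-dim token vectors."""
    # top level: the context RNN advances once per utterance, not per token
    c = np.zeros(H)
    for utt in dialogue:
        c = rnn_step(encode_utterance(utt), c, Wx_ctx, Wh_ctx)
    # latent variable z is sampled once and spans every time step
    # of the generated response (reparameterised Gaussian)
    mu, logvar = W_mu @ c, W_logvar @ c
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=H)
    # decoder conditions on [context; z] at every step
    h, out = np.zeros(H), []
    for _ in range(steps):
        h = rnn_step(np.concatenate([c, z]), h, Wx_dec, Wh_dec)
        out.append(h)
    return np.array(out)

dialogue = [[rng.normal(size=H) for _ in range(4)] for _ in range(2)]
response = generate_response(dialogue)
print(response.shape)  # (3, 8): one hidden state per generated step
```

The key structural point is that `z` is drawn once per utterance rather than per token, which is what lets a single stochastic variable span a variable number of decoding time steps.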
