Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass

28 May 2024
Ethan Shen
Alan Fan
Sarah M Pratt
Jae Sung Park
Matthew Wallingford
Sham Kakade
Ari Holtzman
Ranjay Krishna
Ali Farhadi
Aditya Kusupati
Abstract

Many applications today provide users with multiple auto-complete drafts as they type, including GitHub's code completion, Gmail's smart compose, and Apple's messaging auto-suggestions. Under the hood, language models support this by running an autoregressive inference pass to provide a draft. Consequently, providing k drafts to the user requires running an expensive language model k times. To alleviate the computation cost of running k inference passes, we propose Superposed Decoding, a new decoding algorithm that generates k drafts at the computation cost of one autoregressive inference pass. We achieve this by feeding a superposition of the most recent token embeddings from the k drafts as input to the next decoding step of the language model. At every inference step we combine the k drafts with the top-k tokens to get k² new drafts and cache the k most likely options, using an n-gram interpolation with minimal compute overhead to filter out incoherent generations. Our experiments show that k drafts from Superposed Decoding are at least as coherent and factual as Nucleus Sampling and Greedy Decoding, respectively, while being at least 2.44× faster for k ≥ 3. In a compute-normalized setting, user evaluations demonstrably favor text generated by Superposed Decoding over Nucleus Sampling. Code and more examples are open-sourced at https://github.com/RAIVNLab/SuperposedDecoding.
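The sketch below illustrates the mechanism described in the abstract on a toy setup: at each step the most recent token embeddings of the k drafts are combined into one superposed input, a single forward pass produces a next-token distribution, and the k drafts are expanded with the top-k tokens into k² candidates, of which the k most probable are kept. This is a hedged, self-contained toy (a random GRU stands in for the language model, superposition weights are uniform, and the n-gram interpolation used for filtering is omitted), not the authors' released implementation.

```python
# Toy sketch of Superposed Decoding (illustrative only; see the paper's repo
# for the real implementation). A tiny random model stands in for an LM.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, K, STEPS = 100, 32, 3, 8

embed = nn.Embedding(VOCAB, DIM)   # token embedding table
rnn = nn.GRUCell(DIM, DIM)         # stand-in for one autoregressive LM step
lm_head = nn.Linear(DIM, VOCAB)    # maps hidden state to vocabulary logits

# Each draft is (token_ids, cumulative log-probability); all start identically.
drafts = [([0], 0.0) for _ in range(K)]
weights = torch.full((K,), 1.0 / K)  # superposition weights (uniform here, a simplification)
hidden = torch.zeros(DIM)

with torch.no_grad():
    for _ in range(STEPS):
        # Superpose the most recent token embeddings of the K drafts into one input.
        last_tokens = torch.tensor([tokens[-1] for tokens, _ in drafts])
        superposed = (weights.unsqueeze(1) * embed(last_tokens)).sum(dim=0)

        # One shared autoregressive inference step serves all K drafts.
        hidden = rnn(superposed.unsqueeze(0), hidden.unsqueeze(0)).squeeze(0)
        log_probs = torch.log_softmax(lm_head(hidden), dim=-1)

        # Expand each draft with the top-K next tokens (K^2 candidates), keep the best K.
        top_lp, top_ids = log_probs.topk(K)
        candidates = [
            (tokens + [top_ids[j].item()], score + top_lp[j].item())
            for tokens, score in drafts
            for j in range(K)
        ]
        drafts = sorted(candidates, key=lambda c: c[1], reverse=True)[:K]

for tokens, score in drafts:
    print(tokens, round(score, 3))
```

In the real method the shared forward pass runs a full transformer and the k² candidates are rescored with an n-gram interpolation before pruning; the toy above only shows why the per-step cost stays at a single inference pass regardless of k.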
