DLM-One: Diffusion Language Models for One-Step Sequence Generation

30 May 2025
Tianqi Chen
Shujian Zhang
Mingyuan Zhou
Main: 12 pages, 4 figures, 9 tables; Bibliography: 6 pages; Appendix: 5 pages
Abstract

This paper introduces DLM-One, a score-distillation-based framework for one-step sequence generation with continuous diffusion language models (DLMs). DLM-One eliminates the need for iterative refinement by aligning the scores of a student model's outputs in the continuous token embedding space with the score function of a pretrained teacher DLM. We investigate whether DLM-One can achieve substantial gains in sampling efficiency for language modeling. Through comprehensive experiments on DiffuSeq -- a representative continuous DLM -- we show that DLM-One achieves up to ~500x speedup in inference time while maintaining competitive performance on benchmark text generation tasks used to evaluate the teacher models. We further analyze the method's empirical behavior across multiple datasets, providing initial insights into its generality and practical applicability. Our findings position one-step diffusion as a promising direction for efficient, high-quality language generation and broader adoption of continuous diffusion models operating in embedding space for natural language processing.
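
The abstract's central mechanism, matching the student's one-step outputs against the teacher's score function over noised embeddings, follows the same general pattern as score distillation sampling. The sketch below illustrates that pattern in PyTorch; `student`, `teacher`, the SDS-style surrogate loss, and all shapes are illustrative assumptions rather than the paper's actual objective or API.

import torch
import torch.nn.functional as F

# Hypothetical one-step distillation step. `student` maps Gaussian noise
# straight to clean token embeddings; `teacher` is a pretrained continuous
# DLM whose noise prediction at (x_t, t) defines the target score direction.
# Module names, signatures, and shapes are illustrative assumptions.
def distillation_step(student, teacher, alphas_cumprod, batch, seq_len, dim, device):
    # 1. One-step generation: noise in, clean token embeddings out.
    z = torch.randn(batch, seq_len, dim, device=device)
    x0 = student(z)

    # 2. Re-noise the student's sample at a random diffusion timestep.
    t = torch.randint(0, alphas_cumprod.numel(), (batch,), device=device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

    # 3. The teacher's noise estimate at (x_t, t) serves as the reference score.
    with torch.no_grad():
        eps_teacher = teacher(x_t, t)

    # 4. SDS-style surrogate: its gradient w.r.t. x0 is (eps_teacher - eps),
    #    nudging one-step samples toward the teacher's high-density regions.
    grad = eps_teacher - eps
    loss = F.mse_loss(x0, (x0 - grad).detach())
    return loss

A training loop would call loss.backward() on this value and update only the student's parameters, leaving the teacher frozen throughout.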

View on arXiv: https://arxiv.org/abs/2506.00290
@article{chen2025_2506.00290,
  title={DLM-One: Diffusion Language Models for One-Step Sequence Generation},
  author={Tianqi Chen and Shujian Zhang and Mingyuan Zhou},
  journal={arXiv preprint arXiv:2506.00290},
  year={2025}
}