
Non-Markovian Discrete Diffusion with Causal Language Models

Abstract

Discrete diffusion models offer a flexible, controllable approach to structured sequence generation, yet they still lag behind causal language models in expressive power. A key limitation lies in their reliance on the Markovian assumption, which restricts each denoising step to condition only on the current state, so errors can accumulate without any opportunity for correction. In this paper, we introduce CaDDi, a discrete diffusion model that conditions on the entire generative trajectory, thereby lifting the Markov constraint and allowing the model to revisit and improve past states. By unifying sequential (causal) and temporal (diffusion) reasoning in a single non-Markovian transformer, CaDDi also treats standard causal language models as a special case and permits the direct reuse of pretrained LLM weights with no architectural changes. Empirically, CaDDi outperforms state-of-the-art discrete diffusion baselines on natural-language benchmarks, substantially narrowing the remaining gap to large autoregressive transformers.
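The abstract's core idea, a reverse process that conditions on the whole trajectory of diffusion states rather than only the latest one, can be illustrated with a minimal sketch. The `dummy_denoiser` stub, token ids, and step counts below are hypothetical placeholders (not from the paper); in CaDDi the denoiser would be a trajectory-conditioned transformer.

```python
import random

MASK = -1            # hypothetical mask token id
VOCAB = [0, 1, 2, 3]  # toy vocabulary for illustration

def dummy_denoiser(trajectory, position):
    # Stand-in for the trajectory-conditioned transformer: it receives the
    # FULL history of diffusion states (x_T, ..., x_t), not just the latest
    # one. Here we return a seeded pseudo-random token purely for illustration.
    rng = random.Random(len(trajectory) * 131 + position)
    return rng.choice(VOCAB)

def sample_non_markovian(seq_len, num_steps, denoiser=dummy_denoiser):
    """Sketch of a non-Markovian reverse process: every step re-predicts
    each position from the entire generative trajectory, so earlier
    predictions can be revisited and corrected, unlike a Markovian sampler
    that would only see trajectory[-1]."""
    x = [MASK] * seq_len       # x_T: fully masked initial state
    trajectory = [list(x)]     # running history of all intermediate states
    for _ in range(num_steps):
        x = [denoiser(trajectory, i) for i in range(seq_len)]
        trajectory.append(list(x))
    return x, trajectory
```

A Markovian sampler would replace the `trajectory` argument with just the current state; passing the full history is what lets the model "revisit and improve past states" as described above.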

@article{zhang2025_2502.09767,
  title={Non-Markovian Discrete Diffusion with Causal Language Models},
  author={Yangtian Zhang and Sizhuang He and Daniel Levine and Lawrence Zhao and David Zhang and Syed A Rizvi and Emanuele Zappala and Rex Ying and David van Dijk},
  journal={arXiv preprint arXiv:2502.09767},
  year={2025}
}