ReDDiT: Rehashing Noise for Discrete Visual Generation

Discrete diffusion models are gaining traction in visual generation for their efficiency and compatibility. However, pioneering attempts still fall behind their continuous counterparts, which we attribute to the noise (absorbing state) design and sampling heuristics. In this study, we propose a rehashing noise framework for discrete diffusion transformers, termed ReDDiT, to extend absorbing states and improve the expressive capacity of discrete diffusion models. ReDDiT enriches the potential paths that latent variables can traverse during training through randomized multi-index corruption. The derived rehash sampler, which reverses the randomized absorbing paths, guarantees the diversity and low discrepancy of the generation process. These reformulations lead to more consistent and competitive generation quality, mitigating the need for heavily tuned randomness. Experiments show that ReDDiT significantly outperforms the baseline (reducing gFID from 6.18 to 1.61) and is on par with continuous counterparts at higher efficiency.
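The abstract only sketches the corruption mechanism. As a rough illustration, the snippet below shows one plausible reading of randomized multi-index corruption: instead of collapsing every corrupted position to a single [MASK] token, each masked position is rehashed to one of several absorbing tokens appended after the real vocabulary. The function name, the linear masking schedule, and the absorbing-token layout are our own assumptions for illustration, not the paper's specification.

```python
import torch

def rehash_corrupt(x0, t, vocab_size, num_absorb=8, generator=None):
    """Hypothetical sketch of randomized multi-index corruption.

    Rather than mapping every corrupted position to one [MASK]
    (absorbing) token, each position is "rehashed" to one of
    `num_absorb` absorbing indices placed after the real vocabulary.
    The masking schedule and token layout here are illustrative
    assumptions, not the paper's exact formulation.
    """
    # Linear schedule: at diffusion time t in [0, 1], mask ~t of tokens.
    mask = torch.rand(x0.shape, generator=generator) < t
    # Randomly hash each masked position to one of the absorbing indices.
    absorb_ids = vocab_size + torch.randint(
        0, num_absorb, x0.shape, generator=generator
    )
    return torch.where(mask, absorb_ids, x0)

# Usage: corrupt a batch of token sequences at diffusion time t = 0.5.
tokens = torch.randint(0, 1024, (2, 16))  # vocab_size = 1024
xt = rehash_corrupt(tokens, t=0.5, vocab_size=1024)
```

Under this reading, the multiple absorbing indices carry no semantics of their own; they enrich the set of forward corruption paths, which the rehash sampler then reverses at generation time.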
@article{ma2025_2505.19656,
  title   = {ReDDiT: Rehashing Noise for Discrete Visual Generation},
  author  = {Tianren Ma and Xiaosong Zhang and Boyu Yang and Junlan Feng and Qixiang Ye},
  journal = {arXiv preprint arXiv:2505.19656},
  year    = {2025}
}