
LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models

Abstract

While Masked Diffusion Models (MDMs), such as LLaDA, present a promising paradigm for language modeling, there has been relatively little effort in aligning these models with human preferences via reinforcement learning. The challenge primarily arises from the high variance in Evidence Lower Bound (ELBO)-based likelihood estimates required for preference optimization. To address this issue, we propose Variance-Reduced Preference Optimization (VRPO), a framework that formally analyzes the variance of ELBO estimators and derives bounds on both the bias and variance of preference optimization gradients. Building on this theoretical foundation, we introduce unbiased variance reduction strategies, including optimal Monte Carlo budget allocation and antithetic sampling, that significantly improve the performance of MDM alignment. We demonstrate the effectiveness of VRPO by applying it to LLaDA, and the resulting model, LLaDA 1.5, outperforms its SFT-only predecessor consistently and significantly across mathematical (GSM8K +4.7), code (HumanEval +3.0, MBPP +1.8), and alignment benchmarks (IFEval +4.0, Arena-Hard +4.3). Furthermore, LLaDA 1.5 demonstrates highly competitive mathematical performance compared with strong language MDMs and ARMs. Project page: this https URL.
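
To make the abstract's variance-reduction ideas concrete, here is a minimal PyTorch sketch of a DPO-style preference loss built on Monte Carlo ELBO estimates, with a fixed sampling budget and shared (antithetic-style) noise across the policy and reference estimates. This is one illustrative reading of the abstract only: the elbo_term method, the masking scheme, and the exact way noise is reused are assumptions, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def mc_elbo(model, prompt, response, timesteps, masks):
        """Monte Carlo ELBO estimate of log p(response | prompt) for a masked
        diffusion model, averaged over a fixed set of noise draws.
        `model.elbo_term` is a hypothetical per-draw ELBO term."""
        per_draw = [model.elbo_term(prompt, response, t, m)
                    for t, m in zip(timesteps, masks)]
        return torch.stack(per_draw).mean()

    def vrpo_style_dpo_loss(policy, ref, prompt, chosen, rejected,
                            n_draws=8, beta=0.1):
        """DPO-style preference loss on ELBO estimates, sketching two ideas
        from the abstract: a fixed Monte Carlo budget (n_draws) and reuse of
        the same noise for policy and reference estimates."""
        # Draw one set of diffusion timesteps and token masks, then reuse it
        # for every ELBO term so the shared noise largely cancels in the
        # log-ratio differences.
        timesteps = torch.rand(n_draws)
        masks_c = [torch.rand_like(chosen, dtype=torch.float) < t for t in timesteps]
        masks_r = [torch.rand_like(rejected, dtype=torch.float) < t for t in timesteps]

        logp_c = mc_elbo(policy, prompt, chosen, timesteps, masks_c)
        logp_r = mc_elbo(policy, prompt, rejected, timesteps, masks_r)
        with torch.no_grad():
            ref_c = mc_elbo(ref, prompt, chosen, timesteps, masks_c)
            ref_r = mc_elbo(ref, prompt, rejected, timesteps, masks_r)

        margin = beta * ((logp_c - ref_c) - (logp_r - ref_r))
        return -F.logsigmoid(margin)

The point of the sketch is that all four ELBO estimates entering the preference margin reuse the same timesteps and masks, so the Monte Carlo noise is correlated and partially cancels in the differences; this is the intuition behind the variance reduction the abstract describes, while the formal bias and variance analysis and the optimal budget allocation are developed in the paper itself.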

@article{zhu2025_2505.19223,
  title={LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models},
  author={Fengqi Zhu and Rongzhen Wang and Shen Nie and Xiaolu Zhang and Chunwei Wu and Jun Hu and Jun Zhou and Jianfei Chen and Yankai Lin and Ji-Rong Wen and Chongxuan Li},
  journal={arXiv preprint arXiv:2505.19223},
  year={2025}
}