
Enhanced DACER Algorithm with High Diffusion Efficiency

Main: 10 pages, 7 figures, 2 tables; bibliography: 3 pages; appendix: 5 pages
Abstract

Due to their expressive capacity, diffusion models have shown great promise in offline RL and imitation learning. Diffusion Actor-Critic with Entropy Regulator (DACER) extended this capability to online RL by using the reverse diffusion process as a policy approximator, trained end-to-end with policy gradient methods, and achieved strong performance. However, this comes at the cost of many diffusion steps, which significantly hampers training efficiency, while directly reducing the number of steps causes noticeable performance degradation. Critically, this lack of inference efficiency becomes a significant bottleneck for applying diffusion policies in real-time online RL settings. To improve training and inference efficiency while maintaining or even enhancing performance, we propose a Q-gradient field objective as an auxiliary optimization target to guide the denoising process at each diffusion step. Nonetheless, we observe that the independence of the Q-gradient field from the diffusion time step negatively impacts the performance of the diffusion policy. To address this, we introduce a temporal weighting mechanism that enables the model to efficiently eliminate large-scale noise in the early stages and refine actions in the later stages. Experimental results on MuJoCo benchmarks and several multimodal tasks demonstrate that the resulting DACER2 algorithm achieves state-of-the-art performance on most MuJoCo control tasks with only five diffusion steps, while also exhibiting stronger multimodality than DACER.
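
To make the abstract's idea concrete, below is a minimal PyTorch sketch of what a Q-gradient field auxiliary objective with temporal weighting might look like. This is not the authors' implementation: the interfaces (q_net, denoiser), the normalized-gradient target, and the linear weight schedule are all assumptions made purely for illustration.

import torch

def q_gradient_auxiliary_loss(q_net, denoiser, state, noisy_action, t, num_steps=5):
    """Hypothetical sketch of a Q-gradient field auxiliary loss.

    q_net(state, action)        -> Q-value, shape (batch, 1)
    denoiser(state, action, t)  -> predicted denoised action, shape (batch, act_dim)
    t                           -> diffusion step indices, LongTensor of shape (batch,)

    The exact target and weighting in the paper may differ; this only
    illustrates the mechanism described in the abstract.
    """
    # Treat the noisy action as an input point at which to probe the critic.
    a = noisy_action.detach().requires_grad_(True)
    q = q_net(state, a)

    # The "Q-gradient field": gradient of Q w.r.t. the (noisy) action.
    # Detached from the actor's graph, so it acts as a fixed guidance signal.
    (q_grad,) = torch.autograd.grad(q.sum(), a)
    direction = q_grad / (q_grad.norm(dim=-1, keepdim=True) + 1e-8)

    # Temporal weighting (assumed linear schedule): larger weight at high-noise
    # steps so coarse noise is removed early, smaller weight at low-noise steps
    # so the final steps are freer to refine the action.
    w = (t.float() + 1.0) / num_steps            # shape (batch,)

    # Encourage each denoising step to move the action along the Q-gradient field.
    pred = denoiser(state, a, t)                 # gradients flow into the denoiser
    target = a.detach() + direction
    return (w.unsqueeze(-1) * (pred - target).pow(2)).mean()

In a training loop, such a term would be added to the usual policy-gradient objective for the diffusion actor, with its relative weight treated as a hyperparameter; only the denoiser receives gradients from this loss, while the critic supplies guidance through q_grad.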

@article{wang2025_2505.23426,
  title={Enhanced DACER Algorithm with High Diffusion Efficiency},
  author={Yinuo Wang and Mining Tan and Wenjun Zou and Haotian Lin and Xujie Song and Wenxuan Wang and Tong Liu and Likun Wang and Guojian Zhan and Tianze Zhu and Shiqi Liu and Jingliang Duan and Shengbo Eben Li},
  journal={arXiv preprint arXiv:2505.23426},
  year={2025}
}