
Decision Flow Policy Optimization

Main: 9 pages · 3 figures · 4 tables · Appendix: 9 pages · Bibliography: 6 pages
Abstract

In recent years, generative models have shown remarkable capabilities across diverse fields, including images, videos, language, and decision-making. By applying powerful generative models such as flow-based models to reinforcement learning, we can effectively model complex multi-modal action distributions and achieve superior robotic control in continuous action spaces, overcoming the single-modal limitation of traditional Gaussian-based policies. Previous methods usually adopt generative models as behavior models that fit state-conditioned action distributions from datasets, while policy optimization is conducted separately through additional policies using value-based sample weighting or gradient-based updates. However, this separation prevents the simultaneous optimization of multi-modal distribution fitting and policy improvement, ultimately hindering training and degrading performance. To address this issue, we propose Decision Flow, a unified framework that integrates multi-modal action distribution modeling and policy optimization. Specifically, our method formulates the action generation procedure of flow-based models as a flow decision-making process, where each action generation step corresponds to one flow decision. Consequently, our method seamlessly optimizes the flow policy while capturing multi-modal action distributions. We provide rigorous proofs for Decision Flow and validate its effectiveness through extensive experiments across dozens of offline RL environments. Compared with established offline RL baselines, the results demonstrate that our method achieves or matches state-of-the-art performance.
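To make the "flow decision" framing concrete, the sketch below shows what action generation with a state-conditioned flow policy typically looks like: a learned velocity field is Euler-integrated from Gaussian noise to an action, and each integration step can be read as one flow decision. This is a minimal illustration under our own assumptions, not the authors' implementation; the names (VelocityField, sample_action, num_steps) are hypothetical.

```python
import torch
import torch.nn as nn


class VelocityField(nn.Module):
    """Predicts the flow velocity v_theta(a_t, t | s) for a state-conditioned flow policy."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, a_t, t):
        # Condition the velocity on the environment state, the current flow sample, and time.
        return self.net(torch.cat([state, a_t, t], dim=-1))


@torch.no_grad()
def sample_action(policy: VelocityField, state: torch.Tensor, num_steps: int = 10):
    """Generate an action by Euler-integrating the flow from noise to data.

    Each of the `num_steps` integration steps can be viewed as one flow decision:
    the pair (state, a_t) plays the role of a flow state, and the velocity update
    is the flow action taken at that step.
    """
    batch = state.shape[0]
    action_dim = policy.net[-1].out_features
    a_t = torch.randn(batch, action_dim)          # start from a Gaussian prior sample
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = torch.full((batch, 1), k * dt)
        a_t = a_t + dt * policy(state, a_t, t)    # one flow decision per step
    return a_t
```

In a unified scheme of the kind the abstract describes, each per-step update could carry its own value signal during training, so that distribution fitting and policy improvement act on the same generation trajectory rather than being optimized by separate models.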

@article{hu2025_2505.20350,
  title={Decision Flow Policy Optimization},
  author={Jifeng Hu and Sili Huang and Siyuan Guo and Zhaogeng Liu and Li Shen and Lichao Sun and Hechang Chen and Yi Chang and Dacheng Tao},
  journal={arXiv preprint arXiv:2505.20350},
  year={2025}
}