
Flow-Based Policy for Online Reinforcement Learning

Main: 10 pages, 3 figures, 2 tables; Bibliography: 3 pages; Appendix: 6 pages
Abstract

We present FlowRL, a framework for online reinforcement learning that integrates flow-based policy representation with Wasserstein-2-regularized optimization. We argue that, in addition to training signals, enhancing the expressiveness of the policy class is crucial for performance gains in RL. Flow-based generative models offer such potential, excelling at capturing complex, multimodal action distributions. However, their direct application in online RL is challenging due to a fundamental objective mismatch: standard flow training optimizes for static data imitation, while RL requires value-based policy optimization through a dynamic replay buffer, leading to difficult optimization landscapes. FlowRL first models policies via a state-dependent velocity field, generating actions through deterministic ODE integration from noise. We then derive a constrained policy search objective that jointly maximizes Q through the flow policy while bounding the Wasserstein-2 distance to a behavior-optimal policy implicitly derived from the replay buffer. This formulation aligns the flow optimization with the RL objective, enabling efficient and value-aware policy learning despite the complexity of the policy class. Empirical evaluations on DMControl and HumanoidBench demonstrate that FlowRL achieves competitive performance on online reinforcement learning benchmarks.
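In schematic terms (notation ours, not taken from the paper), the constrained search described above can be written as $\max_\theta \mathbb{E}_s\big[Q(s, \pi_\theta(s))\big]$ subject to $W_2\big(\pi_\theta(\cdot \mid s),\, \pi_{b^*}(\cdot \mid s)\big) \le \epsilon$, where $\pi_{b^*}$ denotes the behavior-optimal policy implied by the replay buffer. The sketch below, which is not the authors' code, illustrates only the action-generation half of the abstract: a state-conditioned velocity field integrated with a fixed-step Euler ODE solver from Gaussian noise to an action. All class and parameter names (VelocityField, FlowPolicy, num_steps) are illustrative assumptions.

# Minimal sketch of flow-based action generation, assuming a PyTorch setup.
import torch
import torch.nn as nn


class VelocityField(nn.Module):
    """v_theta(s, x_t, t): predicts the flow velocity at interpolation time t."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, x_t, t):
        return self.net(torch.cat([state, x_t, t], dim=-1))


class FlowPolicy(nn.Module):
    """Maps noise to actions by deterministic Euler integration of the velocity field."""

    def __init__(self, state_dim: int, action_dim: int, num_steps: int = 10):
        super().__init__()
        self.velocity = VelocityField(state_dim, action_dim)
        self.action_dim = action_dim
        self.num_steps = num_steps

    def forward(self, state):
        batch = state.shape[0]
        x = torch.randn(batch, self.action_dim, device=state.device)  # x_0 ~ N(0, I)
        dt = 1.0 / self.num_steps
        for k in range(self.num_steps):
            t = torch.full((batch, 1), k * dt, device=state.device)
            x = x + dt * self.velocity(state, x, t)  # Euler step along the ODE
        return torch.tanh(x)  # squash to the action range [-1, 1]


if __name__ == "__main__":
    policy = FlowPolicy(state_dim=17, action_dim=6)
    actions = policy(torch.randn(32, 17))
    print(actions.shape)  # torch.Size([32, 6])

In a full training loop, the Q-maximization and Wasserstein-2 regularization described in the abstract would be applied to the parameters of this velocity field; that part is omitted here since the abstract does not specify its exact form.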

@article{lv2025_2506.12811,
  title={Flow-Based Policy for Online Reinforcement Learning},
  author={Lei Lv and Yunfei Li and Yu Luo and Fuchun Sun and Tao Kong and Jiafeng Xu and Xiao Ma},
  journal={arXiv preprint arXiv:2506.12811},
  year={2025}
}