
Low-Rank Adaptation for Critic Learning in Off-Policy Reinforcement Learning

Yuan Zhuang
Yuexin Bian
Sihong He
Jie Feng
Qing Su
Songyang Han
Jonathan Petit
Shihao Ji
Yuanyuan Shi
Fei Miao
Main: 9 pages · Appendix: 8 pages · Bibliography: 2 pages · 6 figures · 8 tables
Abstract

Scaling critic capacity is a promising direction for enhancing off-policy reinforcement learning (RL). However, larger critics are prone to overfitting and become unstable under replay-buffer-based bootstrapped training. This paper leverages Low-Rank Adaptation (LoRA) as a structural-sparsity regularizer for off-policy critics. Our approach freezes randomly initialized base matrices and optimizes only the low-rank adapters, thereby constraining critic updates to a low-dimensional subspace. Building on SimbaV2, we further develop a compatible LoRA formulation that preserves SimbaV2's hyperspherical normalization geometry under frozen-backbone training. We evaluate our method with SAC and FastTD3 on DeepMind Control locomotion and IsaacLab robotics benchmarks, where LoRA consistently achieves lower critic loss during training and stronger policy performance. Extensive experiments demonstrate that adaptive low-rank updates provide a simple, scalable, and effective structural regularization for critic learning in off-policy RL.
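To make the core mechanism concrete, below is a minimal PyTorch sketch of the setup the abstract describes: a linear layer whose base weight is frozen at its random initialization, with only rank-r adapter factors trained, so critic updates are confined to a low-dimensional subspace. The class name, rank, scaling, and initialization choices here are illustrative assumptions, not the paper's implementation (in particular, this sketch omits SimbaV2's hyperspherical normalization).

```python
import torch
import torch.nn as nn


class LoRACriticLinear(nn.Module):
    """Linear layer with a frozen random base and a trainable low-rank adapter.

    Hypothetical sketch of the idea in the abstract: the base matrix W is
    frozen at random initialization, and only the rank-r factors A and B are
    optimized, so the effective update delta_W = B @ A has rank at most r.
    """

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen, randomly initialized base layer (never updated).
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)

        # Trainable low-rank factors; B is zero-initialized so training
        # starts exactly at the frozen base function (standard LoRA choice).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + b + scaling * (x A^T) B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.t()) @ self.lora_B.t()


# Usage sketch: a small Q-network built from such layers. Only the adapter
# parameters appear in the optimizer, matching the frozen-backbone training
# described in the abstract.
critic = nn.Sequential(
    LoRACriticLinear(24 + 6, 256), nn.ReLU(),   # state_dim + action_dim (illustrative)
    LoRACriticLinear(256, 256), nn.ReLU(),
    LoRACriticLinear(256, 1),
)
trainable = [p for p in critic.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=3e-4)
```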
