Broad Critic Deep Actor Reinforcement Learning for Continuous Control

Deep reinforcement learning (DRL) has demonstrated promising results in continuous control. However, its reliance on deep neural networks (DNNs) demands extensive training data and incurs high computational cost. To address this issue, a novel hybrid actor-critic reinforcement learning (RL) framework is introduced. The proposed framework integrates the broad learning system (BLS) with DNNs, aiming to combine the strengths of the two distinct architectural paradigms. Specifically, the critic network employs BLS for rapid value estimation via ridge regression, while the actor network retains a DNN structure to perform policy-gradient optimization. This hybrid design is generalizable and can enhance existing actor-critic algorithms. To demonstrate its versatility, the proposed framework is integrated into three widely used actor-critic algorithms -- deep deterministic policy gradient (DDPG), soft actor-critic (SAC), and twin delayed DDPG (TD3) -- yielding BLS-augmented variants. Experimental results show that all BLS-enhanced versions surpass their original counterparts in training efficiency and accuracy. These improvements highlight the suitability of the proposed framework for real-time control scenarios, where computational efficiency and rapid adaptation are critical.
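To make the critic-side idea concrete, below is a minimal sketch (not the authors' implementation) of a BLS-style critic: randomly initialized, frozen feature and enhancement nodes map state-action pairs to a broad feature vector, and only the output weights are fit in closed form by ridge regression against TD targets. Names such as BLSCritic, n_feature, n_enhance, and lam are illustrative assumptions; in the actual framework the actor remains a standard DNN trained with policy gradients, and the TD targets would come from the host algorithm (DDPG, SAC, or TD3) and its target networks.

```python
import numpy as np

class BLSCritic:
    """Illustrative BLS-style critic: frozen random feature/enhancement
    nodes; only the output weights beta are learned, in closed form."""

    def __init__(self, input_dim, n_feature=64, n_enhance=64, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        # Random, fixed mappings -- only beta (output weights) is trained.
        self.Wf = rng.standard_normal((input_dim, n_feature)) / np.sqrt(input_dim)
        self.We = rng.standard_normal((n_feature, n_enhance)) / np.sqrt(n_feature)
        self.lam = lam
        self.beta = np.zeros(n_feature + n_enhance)

    def _features(self, sa):
        z = sa @ self.Wf                 # feature nodes
        h = np.tanh(z @ self.We)         # nonlinear enhancement nodes
        return np.concatenate([z, h], axis=1)

    def fit(self, states, actions, targets):
        """Ridge regression: beta = (A^T A + lam * I)^{-1} A^T y."""
        A = self._features(np.concatenate([states, actions], axis=1))
        G = A.T @ A + self.lam * np.eye(A.shape[1])
        self.beta = np.linalg.solve(G, A.T @ targets)

    def q_value(self, states, actions):
        A = self._features(np.concatenate([states, actions], axis=1))
        return A @ self.beta


if __name__ == "__main__":
    # Hypothetical shapes and stand-in TD targets, purely for illustration.
    s_dim, a_dim, batch = 8, 2, 256
    rng = np.random.default_rng(1)
    s = rng.standard_normal((batch, s_dim))
    a = rng.standard_normal((batch, a_dim))
    r = rng.standard_normal(batch)
    y = r + 0.99 * rng.standard_normal(batch)   # placeholder bootstrap term
    critic = BLSCritic(s_dim + a_dim)
    critic.fit(s, a, y)
    print(critic.q_value(s, a)[:5])
```

The appeal of this design, as described in the abstract, is that the critic update reduces to a single linear solve rather than many gradient steps, which is where the claimed gains in training efficiency for real-time control would come from.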
@article{thalagala2025_2411.15806,
  title   = {Broad Critic Deep Actor Reinforcement Learning for Continuous Control},
  author  = {Shiron Thalagala and Pak Kin Wong and Xiaozheng Wang and Tianang Sun},
  journal = {arXiv preprint arXiv:2411.15806},
  year    = {2025}
}