Exploiting inter-agent coupling information for efficient reinforcement learning of cooperative LQR

Developing scalable and efficient reinforcement learning algorithms for cooperative multi-agent control has received significant attention in recent years. The existing literature has proposed inexact decompositions of local Q-functions based on empirical information structures between the agents. In this paper, we exploit inter-agent coupling information and propose a systematic approach to exactly decompose the local Q-function of each agent. We develop an approximate least-squares policy iteration algorithm based on the proposed decomposition and identify two architectures for learning the local Q-function of each agent. We establish that the worst-case sample complexity of the decomposition matches that of the centralized case, and we derive necessary and sufficient graphical conditions on the inter-agent couplings for achieving better sample efficiency. We demonstrate the improved sample and computational efficiency on numerical examples.
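To make the least-squares policy iteration (LSPI) idea concrete, below is a minimal single-agent sketch for a discounted LQR problem. It is not the paper's multi-agent decomposition: the system matrices, discount factor, sample sizes, and exploration noise are illustrative assumptions. The Q-function of a linear policy is exactly quadratic, Q_K(x, u) = [x; u]^T H [x; u], so H can be fit by least squares on the Bellman equation and the policy improved greedily from the blocks of H; the learned gain is checked against a Riccati fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative double-integrator-style system (not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Qc = np.eye(2)          # state cost
Rc = np.array([[1.0]])  # input cost
gamma = 0.99            # discount factor
n, m = 2, 1

def riccati_K(A, B, Qc, Rc, gamma, iters=2000):
    """Discounted LQR gain via fixed-point Riccati iteration (reference solution)."""
    P = Qc.copy()
    for _ in range(iters):
        S = Rc + gamma * B.T @ P @ B
        K = gamma * np.linalg.solve(S, B.T @ P @ A)
        P = Qc + gamma * A.T @ P @ A - gamma * A.T @ P @ B @ K
    return K

def lstdq(K, N=500, sigma=1.0):
    """Least-squares fit of H in Q_K(x, u) = [x; u]^T H [x; u].

    Regresses cost against phi(z_t) - gamma * phi(z_{t+1}) with
    phi(z) = vec(z z^T); exact here since dynamics are deterministic.
    """
    Phi = np.zeros((N, (n + m) ** 2))
    c = np.zeros(N)
    for i in range(N):
        x = rng.standard_normal(n)
        u = -K @ x + sigma * rng.standard_normal(m)  # exploratory action
        xn = A @ x + B @ u
        un = -K @ xn                                  # on-policy next action
        z, zn = np.concatenate([x, u]), np.concatenate([xn, un])
        Phi[i] = np.outer(z, z).ravel() - gamma * np.outer(zn, zn).ravel()
        c[i] = x @ Qc @ x + u @ Rc @ u
    h, *_ = np.linalg.lstsq(Phi, c, rcond=None)
    H = h.reshape(n + m, n + m)
    return 0.5 * (H + H.T)  # symmetrize

# LSPI: alternate policy evaluation (lstdq) and greedy improvement.
K = np.zeros((m, n))  # zero gain is admissible here (sqrt(gamma)*A is stable)
for _ in range(15):
    H = lstdq(K)
    # argmin_u Q(x, u) gives u = -H_uu^{-1} H_ux x
    K = np.linalg.solve(H[n:, n:], H[n:, :n])
```

With exact quadratic features and deterministic dynamics, the evaluation step is exact and the loop reduces to policy iteration, so the learned gain converges to the Riccati solution in a few iterations.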
@article{syed2025_2504.20927,
  title={Exploiting inter-agent coupling information for efficient reinforcement learning of cooperative LQR},
  author={Shahbaz P Qadri Syed and He Bai},
  journal={arXiv preprint arXiv:2504.20927},
  year={2025}
}