CORA: Coalitional Rational Advantage Decomposition for Multi-Agent Policy Gradients

Main: 8 pages, 7 figures, 3 tables; Bibliography: 3 pages; Appendix: 6 pages
Abstract

This work focuses on the credit assignment problem in cooperative multi-agent reinforcement learning (MARL). Sharing the global advantage among agents often leads to suboptimal policy updates because it fails to account for the distinct contributions of individual agents. Although numerous methods consider global or individual contributions for credit assignment, a detailed analysis at the coalition level remains lacking. This work analyzes the over-updating problem in multi-agent policy updates from a coalition-level perspective. To address this issue, we propose a credit assignment method called Coalitional Rational Advantage Decomposition (CORA). CORA evaluates coalitional advantages via marginal contributions from all possible coalitions and decomposes advantages using the core solution from cooperative game theory, ensuring coalitional rationality. To reduce computational overhead, CORA employs random coalition sampling. Experiments on matrix games, differential games, and multi-agent collaboration benchmarks demonstrate that CORA outperforms strong baselines, particularly in tasks with multiple local optima. These findings highlight the importance of coalition-aware credit assignment for improving MARL performance.
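The abstract describes CORA only at a high level, so the following is a minimal sketch of the general idea as stated there: sample random coalitions, evaluate their coalitional advantages (assumed to come from a centralized critic, not shown), and solve a small linear program that distributes the grand-coalition advantage across agents subject to coalitional-rationality (core) constraints. The function names (`sample_coalitions`, `core_decompose`) and the least-core-style slack relaxation are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

def sample_coalitions(n_agents, n_samples, rng):
    """Sample random non-empty, proper coalitions as boolean membership masks."""
    coalitions = []
    while len(coalitions) < n_samples:
        mask = rng.random(n_agents) < 0.5
        if 0 < mask.sum() < n_agents:
            coalitions.append(mask)
    return np.array(coalitions)

def core_decompose(coalition_adv, grand_adv, coalitions):
    """Decompose the grand-coalition advantage into per-agent credits.

    Solves for credits x (and a minimal slack eps >= 0) such that
        sum_i x_i = grand_adv                        (efficiency)
        sum_{i in C} x_i >= coalition_adv(C) - eps   (coalitional rationality)
    over the sampled coalitions, i.e. a least-core-style linear program.
    """
    n_samples, n_agents = coalitions.shape
    # Decision variables: [x_1, ..., x_n, eps]; objective: minimize eps.
    c = np.zeros(n_agents + 1)
    c[-1] = 1.0
    # Rationality constraints rewritten as: -sum_{i in C} x_i - eps <= -A(C).
    A_ub = np.hstack([-coalitions.astype(float), -np.ones((n_samples, 1))])
    b_ub = -np.asarray(coalition_adv, dtype=float)
    # Efficiency constraint: sum_i x_i = A(N).
    A_eq = np.hstack([np.ones((1, n_agents)), np.zeros((1, 1))])
    b_eq = np.array([grand_adv])
    bounds = [(None, None)] * n_agents + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_agents]

# Hypothetical usage with placeholder coalitional advantages.
rng = np.random.default_rng(0)
coalitions = sample_coalitions(n_agents=4, n_samples=20, rng=rng)
coalition_adv = coalitions.sum(axis=1) * 0.1   # stand-in for critic estimates
credits = core_decompose(coalition_adv, grand_adv=1.0, coalitions=coalitions)
```

Under this reading, the per-agent credits would replace the shared global advantage when weighting each agent's policy-gradient update; the actual CORA estimator and its integration with the critic are detailed in the paper itself.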

@article{ji2025_2506.04265,
  title={CORA: Coalitional Rational Advantage Decomposition for Multi-Agent Policy Gradients},
  author={Mengda Ji and Genjiu Xu and Liying Wang},
  journal={arXiv preprint arXiv:2506.04265},
  year={2025}
}