Learning Efficient Flocking Control based on Gibbs Random Fields

5 February 2025
Dengyu Zhang
Chenghao
Feng Xue
Qingrui Zhang
Abstract

Flocking control is essential for multi-robot systems in diverse applications, yet achieving efficient flocking in congested environments poses challenges regarding computational burden, performance optimality, and motion safety. This paper addresses these challenges through a multi-agent reinforcement learning (MARL) framework built on Gibbs Random Fields (GRFs). With GRFs, a multi-robot system is represented by a set of random variables conforming to a joint probability distribution, offering a fresh perspective on flocking reward design. A decentralized training and execution mechanism, which enhances the scalability of MARL with respect to the number of robots, is realized using a GRF-based credit assignment method. An action attention module is introduced to implicitly anticipate the motion intentions of neighboring robots, thereby mitigating potential non-stationarity issues in MARL. The proposed framework enables learning an efficient distributed control policy for multi-robot systems in challenging environments, achieving a success rate of around 99%, as demonstrated through thorough comparisons with state-of-the-art solutions in simulations and experiments. Ablation studies are also performed to validate the efficiency of the different framework modules.
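The GRF view ties a joint distribution over robot states to an energy function, with p(x) ∝ exp(−E(x)), so that low joint energy corresponds to desirable flock configurations. The sketch below is only an illustration of that general idea with a toy pairwise potential (repulsion at short range, cohesion at long range); the specific potential form and constants are assumptions for demonstration, not the paper's actual reward design.

```python
import numpy as np

def pairwise_energy(positions, d_ref=1.0, k_rep=1.0, k_att=0.1):
    """Sum of toy pairwise potentials over all robot pairs.

    Repulsive term (k_rep / d) penalizes near-collisions; quadratic
    attractive term pulls inter-robot distances toward d_ref.
    """
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            energy += k_rep / max(d, 1e-6) + k_att * (d - d_ref) ** 2
    return energy

def flocking_reward(positions):
    # Lower joint energy -> higher reward, mirroring p(x) ∝ exp(-E(x)).
    return -pairwise_energy(positions)
```

Under this potential, a well-spaced triangle of robots scores a higher reward than a near-collapsed cluster, which is the qualitative behavior a GRF-based flocking reward should exhibit.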

@article{zhang2025_2502.02984,
  title={Learning Efficient Flocking Control based on Gibbs Random Fields},
  author={Dengyu Zhang and Chenghao and Feng Xue and Qingrui Zhang},
  journal={arXiv preprint arXiv:2502.02984},
  year={2025}
}