
Light Aircraft Game: Basic Implementation and Training Results Analysis

6 pages main text + 1 page bibliography, 10 figures, 1 table
Abstract

This paper investigates multi-agent reinforcement learning (MARL) in a partially observable, cooperative-competitive air-combat environment, the Light Aircraft Game (LAG). We describe the environment's setup, including agent actions, hierarchical controls, and reward design across combat modes such as No Weapon and ShootMissile. Two representative algorithms are evaluated: HAPPO, an on-policy heterogeneous-agent variant of PPO, and HASAC, an off-policy method based on soft actor-critic. We analyze their training stability, reward progression, and inter-agent coordination. Experimental results show that HASAC performs well in the simpler coordination tasks without weapons, while HAPPO adapts better to the more dynamic missile-combat scenarios. These findings offer insight into the trade-offs between on-policy and off-policy methods in multi-agent settings.
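The comparison above rests on the on-policy/off-policy distinction: an on-policy learner such as HAPPO updates only on freshly collected rollouts, while an off-policy learner such as HASAC reuses past transitions from a replay buffer. The sketch below is a minimal, hypothetical illustration of that data-flow difference only; it is not the paper's training code, and the environment and update functions are placeholders rather than the LAG API.

```python
# Hypothetical sketch: contrasts the data flow of an on-policy learner (HAPPO-style)
# with an off-policy learner (HASAC-style). The environment step and the update
# callbacks are placeholders, NOT the LAG API or the paper's implementation.
import random
from collections import deque

NUM_AGENTS = 2


def dummy_env_step(actions):
    """Stand-in for one step of a two-agent combat environment."""
    obs = [[random.random() for _ in range(4)] for _ in range(NUM_AGENTS)]
    rewards = [random.random() for _ in range(NUM_AGENTS)]
    done = random.random() < 0.05
    return obs, rewards, done


def on_policy_iteration(policy_update, horizon=128):
    """HAPPO-style: collect a fresh rollout, update, then discard the data."""
    rollout = []
    for _ in range(horizon):
        actions = [random.random() for _ in range(NUM_AGENTS)]  # placeholder policy
        obs, rewards, done = dummy_env_step(actions)
        rollout.append((obs, actions, rewards, done))
        if done:
            break
    policy_update(rollout)  # data is used once, then thrown away


replay_buffer = deque(maxlen=100_000)


def off_policy_iteration(critic_update, batch_size=64):
    """HASAC-style: append transitions to a replay buffer and sample mini-batches."""
    actions = [random.random() for _ in range(NUM_AGENTS)]
    obs, rewards, done = dummy_env_step(actions)
    replay_buffer.append((obs, actions, rewards, done))
    if len(replay_buffer) >= batch_size:
        batch = random.sample(replay_buffer, batch_size)  # reuse old experience
        critic_update(batch)
```

In general terms, the replay-buffer path tends to improve sample efficiency in slowly varying tasks, whereas the fresh-rollout path keeps updates aligned with the current joint policy, which is one plausible reading of the reported trade-off between the No Weapon and missile-combat results.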

@article{cao2025_2506.14164,
  title={Light Aircraft Game: Basic Implementation and Training Results Analysis},
  author={Hanzhong Cao},
  journal={arXiv preprint arXiv:2506.14164},
  year={2025}
}