The capture of flying micro aerial vehicles (MAVs) has attracted increasing research attention due to its intriguing challenges and promising applications. Despite recent advances, a key limitation of existing work is that capture strategies are often relatively simple and constrained by platform performance. This paper develops control strategies capable of capturing highly maneuverable targets. The unique challenge of achieving capture under non-equilibrium (unstable) conditions distinguishes this task from traditional pursuit-evasion and guidance problems. In this study, we move from larger MAV platforms to a specially designed, compact capture MAV equipped with a custom launching device that retains high maneuverability. We explore both time-optimal planning (TOP) and reinforcement learning (RL) methods. Simulations show that TOP yields shorter, more maneuverable trajectories, while RL excels in real-time adaptability and stability. Moreover, the RL method is validated in real-world experiments, successfully capturing the target even from unstable states.
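As a rough illustration of the time-optimal planning side of the problem, a generic free-final-time interception formulation (not necessarily the exact one used in the paper) can be written as

\[
\min_{T,\;\mathbf{u}(\cdot)} \; T
\quad \text{s.t.} \quad
\dot{\mathbf{x}}(t) = f\big(\mathbf{x}(t), \mathbf{u}(t)\big),\quad
\mathbf{u}(t) \in \mathcal{U},\quad
\big\|\mathbf{p}(T) - \mathbf{p}_{\mathrm{tgt}}(T)\big\| \le r_{\mathrm{cap}},
\]

where \(\mathbf{x}\) denotes the capture MAV state, \(\mathbf{u}\) the control input constrained to an admissible set \(\mathcal{U}\), \(\mathbf{p}\) the MAV position, \(\mathbf{p}_{\mathrm{tgt}}\) the predicted target position, and \(r_{\mathrm{cap}}\) the effective capture (launcher) radius. All symbols here are illustrative assumptions rather than the paper's own notation.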
@article{zheng2025_2503.06578,
  title   = {Non-Equilibrium MAV-Capture-MAV via Time-Optimal Planning and Reinforcement Learning},
  author  = {Canlun Zheng and Zhanyu Guo and Zikang Yin and Chunyu Wang and Zhikun Wang and Shiyu Zhao},
  journal = {arXiv preprint arXiv:2503.06578},
  year    = {2025}
}