ResearchTrend.AI


FlightBench: Benchmarking Learning-based Methods for Ego-vision-based Quadrotors Navigation

9 June 2024
Shu-Ang Yu
Chao Yu
Feng Gao
Yi Wu
Yu Wang
Abstract

Ego-vision-based navigation in cluttered environments is crucial for mobile systems, particularly agile quadrotors. While learning-based methods have shown promise recently, head-to-head comparisons with cutting-edge optimization-based approaches are scarce, leaving open the question of where and to what extent they truly excel. In this paper, we introduce FlightBench, the first comprehensive benchmark that implements various learning-based methods for ego-vision-based navigation and evaluates them against mainstream optimization-based baselines using a broad set of performance metrics. More importantly, we develop a suite of criteria to assess scenario difficulty and design test cases that span different levels of difficulty based on these criteria. Our results show that while learning-based methods excel in high-speed flight and faster inference, they struggle with challenging scenarios like sharp corners or view occlusion. Analytical experiments validate the correlation between our difficulty criteria and flight performance. Moreover, we verify the trend in flight performance within real-world environments through full-pipeline and hardware-in-the-loop experiments. We hope this benchmark and these criteria will drive future advancements in learning-based navigation for ego-vision quadrotors. Code and documentation are available at this https URL.
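The abstract describes comparing learning-based and optimization-based planners across test cases of graded difficulty using several performance metrics. As a minimal sketch of how such per-method, per-difficulty aggregation might look — the record fields (success, average speed, inference latency) and the `Trial`/`summarize` names are illustrative assumptions, not FlightBench's actual API:

```python
from dataclasses import dataclass


@dataclass
class Trial:
    """One hypothetical navigation trial; field names are illustrative."""
    method: str          # e.g. "learning" or "optimization"
    difficulty: int      # assumed scenario difficulty level, 1 (easy) to 3 (hard)
    success: bool        # reached the goal without collision
    avg_speed: float     # mean flight speed, m/s
    inference_ms: float  # per-step planning/inference latency, ms


def summarize(trials):
    """Group trials by (method, difficulty) and report mean metrics."""
    groups = {}
    for t in trials:
        groups.setdefault((t.method, t.difficulty), []).append(t)
    return {
        key: {
            "success_rate": sum(t.success for t in group) / len(group),
            "avg_speed": sum(t.avg_speed for t in group) / len(group),
            "inference_ms": sum(t.inference_ms for t in group) / len(group),
        }
        for key, group in groups.items()
    }
```

Tabulating such a summary per difficulty level is one straightforward way to surface the trend the paper reports: where high-speed, low-latency methods hold up, and where their success rate drops on harder scenarios.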

@article{yu2025_2406.05687,
  title={FlightBench: Benchmarking Learning-based Methods for Ego-vision-based Quadrotors Navigation},
  author={Shu-Ang Yu and Chao Yu and Feng Gao and Yi Wu and Yu Wang},
  journal={arXiv preprint arXiv:2406.05687},
  year={2025}
}