KiteRunner: Language-Driven Cooperative Local-Global Navigation Policy with UAV Mapping in Outdoor Environments

11 March 2025
Shibo Huang
Chenfan Shi
Jian Yang
Hanlin Dong
Jinpeng Mi
Ke Li
Jianfeng Zhang
Miao Ding
Peidong Liang
Xiong You
Xian Wei
Abstract

Autonomous navigation in open-world outdoor environments faces challenges in integrating dynamic conditions, long-distance spatial reasoning, and semantic understanding. Traditional methods struggle to balance local planning, global planning, and semantic task execution, while existing large language models (LLMs) enhance semantic comprehension but lack spatial reasoning capabilities. Although diffusion models excel at local trajectory optimization, they fall short in large-scale, long-distance navigation. To address these gaps, this paper proposes KiteRunner, a language-driven cooperative local-global navigation strategy that combines UAV orthophoto-based global planning with diffusion model-driven local path generation for long-distance navigation in open-world scenarios. Our method leverages real-time UAV orthophotography to construct a global probability map that provides traversability guidance to the local planner, while integrating foundation models such as CLIP and GPT to interpret natural language instructions. Experiments demonstrate that KiteRunner achieves 5.6% and 12.8% improvements in path efficiency over state-of-the-art methods in structured and unstructured environments, respectively, with significant reductions in human interventions and execution time.
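The abstract's core idea — a global traversability probability map built from UAV orthophotos guiding the selection among locally generated candidate paths — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the map, the candidate trajectories, the `alpha` weight, and the scoring function are all assumed for the example (in the paper, candidates would come from the diffusion-based local planner).

```python
import numpy as np

def score_trajectories(trajectories, global_prob_map, resolution=1.0, alpha=0.5):
    """Score candidate local trajectories against a global traversability map.

    trajectories:    (N, T, 2) candidate paths in world coordinates (x, y).
    global_prob_map: (H, W) traversability probabilities derived from the
                     UAV orthophoto (1.0 = freely traversable, 0.0 = blocked).
    resolution:      metres per map cell.
    alpha:           weight between global traversability and path length.
    """
    # Convert world coordinates to map indices, clamped to the map bounds.
    idx = np.clip((trajectories / resolution).astype(int),
                  0, np.array(global_prob_map.shape) - 1)
    # Mean traversability probability along each candidate path.
    traversability = global_prob_map[idx[..., 0], idx[..., 1]].mean(axis=1)
    # Path length as a local efficiency term (shorter is better).
    seg = np.diff(trajectories, axis=1)
    lengths = np.linalg.norm(seg, axis=-1).sum(axis=1)
    lengths = lengths / (lengths.max() + 1e-9)
    # Higher score = more traversable and shorter.
    return alpha * traversability - (1 - alpha) * lengths

# Usage: two equal-length candidates on a toy 100x100 map with one
# low-traversability region; the scorer prefers the path that avoids it.
prob_map = np.ones((100, 100))
prob_map[40:60, 0:50] = 0.1  # low-traversability region
candidates = np.stack([
    np.linspace([0, 25], [90, 25], 20),  # crosses the low-probability region
    np.linspace([0, 80], [90, 80], 20),  # stays on traversable ground
])
best = int(np.argmax(score_trajectories(candidates, prob_map)))
```

The design choice illustrated here is the "cooperative" coupling: the global map does not plan the path itself but biases which locally feasible candidate is executed, which is how a coarse aerial view can steer a fine-grained ground planner.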

@article{huang2025_2503.08330,
  title={KiteRunner: Language-Driven Cooperative Local-Global Navigation Policy with UAV Mapping in Outdoor Environments},
  author={Shibo Huang and Chenfan Shi and Jian Yang and Hanlin Dong and Jinpeng Mi and Ke Li and Jianfeng Zhang and Miao Ding and Peidong Liang and Xiong You and Xian Wei},
  journal={arXiv preprint arXiv:2503.08330},
  year={2025}
}