Flex: End-to-End Text-Instructed Visual Navigation from Foundation Model Features

16 October 2024
Makram Chahine
Alex Quach
Alaa Maalouf
Tsun-Hsuan Wang
Daniela Rus
Abstract

End-to-end learning directly maps sensory inputs to actions, creating highly integrated and efficient policies for complex robotics tasks. However, such models often struggle to generalize beyond their training scenarios, limiting adaptability to new environments, tasks, and concepts. In this work, we investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies under unseen text instructions and visual distribution shifts. Our findings are synthesized in Flex (Fly lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors, generating spatially aware embeddings that integrate semantic and visual information. We demonstrate the effectiveness of this approach on a quadrotor fly-to-target task, where agents trained via behavior cloning on a small simulated dataset successfully generalize to real-world scenes with diverse novel goals and command formulations.
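
To make the described recipe concrete, the sketch below shows the general pattern of conditioning a small trainable control head on features from a frozen vision-language model. It is a minimal sketch, assuming CLIP (via Hugging Face transformers) as a stand-in patch-wise feature extractor and a hypothetical 4-dimensional quadrotor velocity command; the fusion scheme, network sizes, and action space are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch, assuming CLIP as the frozen VLM and a 4-D action space.
# Module names, dimensions, and the fusion scheme are illustrative only.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPTokenizer, CLIPImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen foundation model: supplies patch-wise visual tokens and a text embedding.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
for p in clip.parameters():
    p.requires_grad_(False)

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

PATCH_DIM = clip.vision_model.config.hidden_size  # 768 for ViT-B/32
TEXT_DIM = clip.config.projection_dim             # 512
ACTION_DIM = 4                                    # assumed: vx, vy, vz, yaw rate


class PolicyHead(nn.Module):
    """Small trainable head on top of the frozen VLM features."""

    def __init__(self):
        super().__init__()
        self.fuse = nn.Linear(PATCH_DIM + TEXT_DIM, 256)
        self.mlp = nn.Sequential(
            nn.ReLU(), nn.Linear(256, 128),
            nn.ReLU(), nn.Linear(128, ACTION_DIM),
        )

    def forward(self, patch_feats, text_emb):
        # patch_feats: (B, N_patches, PATCH_DIM); text_emb: (B, TEXT_DIM)
        text_tiled = text_emb.unsqueeze(1).expand(-1, patch_feats.size(1), -1)
        fused = self.fuse(torch.cat([patch_feats, text_tiled], dim=-1))
        pooled = fused.mean(dim=1)  # average over spatial patches
        return self.mlp(pooled)


policy = PolicyHead().to(device)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)


def extract_features(images, instructions):
    """Frozen forward pass: patch tokens (CLS dropped) plus instruction embedding."""
    pixel_values = image_processor(images, return_tensors="pt").pixel_values.to(device)
    tokens = tokenizer(instructions, padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        patch_feats = clip.vision_model(pixel_values=pixel_values).last_hidden_state[:, 1:, :]
        text_emb = clip.get_text_features(**tokens)
    return patch_feats, text_emb


def behavior_cloning_step(images, instructions, expert_actions):
    """One supervised update: regress the expert's action with an MSE loss."""
    patch_feats, text_emb = extract_features(images, instructions)
    pred = policy(patch_feats, text_emb)
    loss = nn.functional.mse_loss(pred, expert_actions.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Only the small head is updated during behavior cloning; the foundation model stays frozen, which is what lets the spatially aware, language-conditioned features carry generalization to unseen instructions and scenes.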

@article{chahine2025_2410.13002,
  title={Flex: End-to-End Text-Instructed Visual Navigation from Foundation Model Features},
  author={Makram Chahine and Alex Quach and Alaa Maalouf and Tsun-Hsuan Wang and Daniela Rus},
  journal={arXiv preprint arXiv:2410.13002},
  year={2025}
}