End-to-end learning directly maps sensory inputs to actions, creating highly integrated and efficient policies for complex robotics tasks. However, such models often struggle to generalize beyond their training scenarios, limiting adaptability to new environments, tasks, and concepts. In this work, we investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies under unseen text instructions and visual distribution shifts. Our findings are synthesized in Flex (Fly lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors, generating spatially aware embeddings that integrate semantic and visual information. We demonstrate the effectiveness of this approach on a quadrotor fly-to-target task, where agents trained via behavior cloning on a small simulated dataset successfully generalize to real-world scenes with diverse novel goals and command formulations.
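To make the architecture described above concrete, below is a minimal, illustrative sketch (not the authors' released code) of the core idea: a frozen VLM used as a patch-wise feature extractor, whose spatially aware patch embeddings are fused with an embedding of the text instruction and fed to a small trainable policy head optimized by behavior cloning. The choice of a CLIP ViT-B/32 backbone, fusion by concatenation, and a 4-D velocity command output are assumptions made for illustration, not details confirmed by the abstract.

import torch
import torch.nn as nn
from transformers import CLIPVisionModel, CLIPTextModel, CLIPTokenizer

BACKBONE = "openai/clip-vit-base-patch32"  # assumed backbone choice

class TextConditionedPolicy(nn.Module):
    def __init__(self, action_dim: int = 4):
        super().__init__()
        # Frozen VLM encoders: only the small head defined below is trained.
        self.vision = CLIPVisionModel.from_pretrained(BACKBONE).requires_grad_(False)
        self.text = CLIPTextModel.from_pretrained(BACKBONE).requires_grad_(False)
        self.tokenizer = CLIPTokenizer.from_pretrained(BACKBONE)
        d_img = self.vision.config.hidden_size   # 768 for ViT-B/32
        d_txt = self.text.config.hidden_size     # 512 for ViT-B/32
        # Trainable head: fuse each patch token with the instruction embedding,
        # pool over patches, then regress a low-dimensional action.
        self.fuse = nn.Sequential(nn.Linear(d_img + d_txt, 256), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, pixel_values: torch.Tensor, instructions: list[str]) -> torch.Tensor:
        with torch.no_grad():
            # Patch-wise visual features (drop the CLS token to keep spatial tokens only).
            patches = self.vision(pixel_values=pixel_values).last_hidden_state[:, 1:]
            tok = self.tokenizer(instructions, padding=True, return_tensors="pt").to(pixel_values.device)
            txt = self.text(**tok).pooler_output                          # (B, d_txt)
        txt = txt.unsqueeze(1).expand(-1, patches.size(1), -1)            # broadcast text to every patch
        fused = self.fuse(torch.cat([patches, txt], dim=-1)).mean(dim=1)  # mean-pool fused patches
        return self.head(fused)                                           # e.g. (vx, vy, vz, yaw_rate)

# Behavior-cloning step on (image, instruction, expert_action) triples, as a sketch:
# policy = TextConditionedPolicy()
# loss = nn.functional.mse_loss(policy(images, texts), expert_actions)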
@article{chahine2025_2410.13002,
  title={Flex: End-to-End Text-Instructed Visual Navigation from Foundation Model Features},
  author={Makram Chahine and Alex Quach and Alaa Maalouf and Tsun-Hsuan Wang and Daniela Rus},
  journal={arXiv preprint arXiv:2410.13002},
  year={2025}
}