ResearchTrend.AI
Augmented Reality for RObots (ARRO): Pointing Visuomotor Policies Towards Visual Robustness

13 May 2025
Reihaneh Mirjalili
Tobias Jülg
Florian Walter
Wolfram Burgard
Abstract

Visuomotor policies trained on human expert demonstrations have recently shown strong performance across a wide range of robotic manipulation tasks. However, these policies remain highly sensitive to domain shifts stemming from background or robot embodiment changes, which limits their generalization capabilities. In this paper, we present ARRO, a novel calibration-free visual representation that leverages zero-shot open-vocabulary segmentation and object detection models to efficiently mask out task-irrelevant regions of the scene without requiring additional training. By filtering visual distractors and overlaying virtual guides during both training and inference, ARRO improves robustness to scene variations and reduces the need for additional data collection. We extensively evaluate ARRO with Diffusion Policy on several tabletop manipulation tasks in both simulation and real-world environments, and further demonstrate its compatibility and effectiveness with generalist robot policies, such as Octo and OpenVLA. Across all settings in our evaluation, ARRO yields consistent performance gains, allows for selective masking to choose between different objects, and shows robustness even to challenging segmentation conditions. Videos showcasing our results are available at: this http URL
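The core visual transformation the abstract describes is masking out task-irrelevant scene regions and overlaying virtual guides on the policy's input images. The following is a minimal sketch of that idea, assuming a binary segmentation mask has already been produced by an off-the-shelf open-vocabulary segmenter; the function name, the square guide marker, and all parameters are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def mask_and_overlay(frame, mask, guide_center=None, guide_half_size=2):
    """Zero out distractor pixels and optionally draw a virtual guide.

    frame: (H, W, 3) uint8 RGB image.
    mask:  (H, W) binary array, 1 for task-relevant pixels
           (e.g. target object and robot gripper), 0 elsewhere.
    """
    # Keep only task-relevant pixels; everything else becomes black.
    out = frame * mask[..., None].astype(frame.dtype)
    if guide_center is not None:
        # Overlay a simple green square as a "virtual guide"
        # (an illustrative stand-in for the paper's overlaid guides).
        y, x = guide_center
        r = guide_half_size
        out[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1] = (0, 255, 0)
    return out
```

Because the same transformation is applied during both training and inference, the policy only ever sees the filtered representation, which is what makes it insensitive to background and embodiment changes outside the mask.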

@article{mirjalili2025_2505.08627,
  title={Augmented Reality for RObots (ARRO): Pointing Visuomotor Policies Towards Visual Robustness},
  author={Reihaneh Mirjalili and Tobias J\"ulg and Florian Walter and Wolfram Burgard},
  journal={arXiv preprint arXiv:2505.08627},
  year={2025}
}