OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation

1 June 2025
Ishika Singh
Ankit Goyal
Stan Birchfield
Dieter Fox
Animesh Garg
Valts Blukis
Main: 10 pages · Appendix: 6 pages · Bibliography: 1 page · 7 figures · 7 tables
Abstract

We introduce OG-VLA, a novel architecture and learning framework that combines the generalization strengths of Vision Language Action models (VLAs) with the robustness of 3D-aware policies. We address the challenge of mapping natural language instructions and multi-view RGBD observations to quasi-static robot actions. 3D-aware robot policies achieve state-of-the-art performance on precise robot manipulation tasks, but struggle to generalize to unseen instructions, scenes, and objects. VLAs, on the other hand, excel at generalizing across instructions and scenes, but can be sensitive to camera and robot pose variations. We leverage prior knowledge embedded in language and vision foundation models to improve the generalization of 3D-aware keyframe policies. OG-VLA projects input observations from diverse views into a point cloud, which is then rendered from canonical orthographic views, ensuring input-view invariance and consistency between input and output spaces. These canonical views are processed with a vision backbone, a Large Language Model (LLM), and an image diffusion model to generate images that encode the next position and orientation of the end-effector on the input scene. Evaluations on the Arnold and Colosseum benchmarks demonstrate state-of-the-art generalization to unseen environments, with over 40% relative improvement while maintaining robust performance in seen settings. We also show real-world adaptation with 3 to 5 demonstrations along with strong generalization. Videos and resources at this https URL.
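The 3D-aware canonicalization step described in the abstract (lifting multi-view RGBD observations into a shared point cloud and re-rendering it from fixed orthographic views) can be illustrated with a short sketch. The snippet below is a minimal illustration under assumed pinhole intrinsics and axis-aligned workspace bounds, not the paper's released code; the function names (unproject_rgbd, render_orthographic) and all parameter values are hypothetical.

# Minimal sketch (not the authors' implementation) of the canonicalization step:
# unproject multi-view RGBD frames into a shared world-frame point cloud, then
# render axis-aligned orthographic views of that cloud. Camera parameters, image
# resolution, and workspace bounds here are illustrative placeholders.
import numpy as np

def unproject_rgbd(rgb, depth, K, cam_to_world):
    """Lift one RGBD frame to world-frame colored points (pinhole camera assumed)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]   # N x 4 homogeneous
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]                 # N x 3 in world frame
    colors = rgb.reshape(-1, 3)[valid]
    return pts_world, colors

def render_orthographic(points, colors, lo, hi, res=224, axis=2):
    """Render a colored point cloud orthographically along one world axis (axis=2 gives a top-down view)."""
    keep = [i for i in range(3) if i != axis]                       # the two in-plane axes
    uv = (points[:, keep] - lo[keep]) / (hi[keep] - lo[keep])       # normalize to [0, 1]
    px = np.clip((uv * (res - 1)).astype(int), 0, res - 1)
    # Write points in increasing order along the projection axis so that points
    # closer to a virtual camera looking down the +axis direction are drawn last and win.
    order = np.argsort(points[:, axis])
    img = np.zeros((res, res, 3), dtype=np.uint8)
    img[px[order, 1], px[order, 0]] = colors[order]
    return img

# Example with synthetic data: two virtual cameras observing the same scene.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[200.0, 0, 64], [0, 200.0, 64], [0, 0, 1]])       # hypothetical intrinsics
    all_pts, all_cols = [], []
    for _ in range(2):
        rgb = rng.integers(0, 255, (128, 128, 3), dtype=np.uint8)
        depth = rng.uniform(0.5, 1.5, (128, 128))
        pts, cols = unproject_rgbd(rgb, depth, K, np.eye(4))
        all_pts.append(pts)
        all_cols.append(cols)
    pts, cols = np.concatenate(all_pts), np.concatenate(all_cols)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    top_view = render_orthographic(pts, cols, lo, hi, axis=2)       # canonical top-down image
    print(top_view.shape)  # (224, 224, 3)

Because the orthographic renders depend only on the reconstructed point cloud and fixed canonical axes, the model's input does not depend on where the physical cameras happened to be, which is the view-invariance property the abstract highlights.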

@article{singh2025_2506.01196,
  title={OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation},
  author={Ishika Singh and Ankit Goyal and Stan Birchfield and Dieter Fox and Animesh Garg and Valts Blukis},
  journal={arXiv preprint arXiv:2506.01196},
  year={2025}
}