ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2504.16054 · Cited By
$π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization


22 April 2025
Physical Intelligence
Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura M. Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Q. Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, Ury Zhilinsky
LM&Ro · VLM
ArXiv · PDF · HTML

Papers citing "$π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization"

7 / 7 papers shown
Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling
William Xie, Max Conway, Yutong Zhang, N. Correll
LM&Ro · LRM
14 May 2025
Pixel Motion as Universal Representation for Robot Control
Kanchana Ranasinghe, Xiang Li, Cristina Mata, J. Park, Michael S. Ryoo
VGen
12 May 2025
X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
Prithwish Dan, K. Kedia, Angela Chao, Edward Weiyi Duan, Maximus Adrian Pace, Wei-Chiu Ma, Sanjiban Choudhury
11 May 2025
Multi-agent Embodied AI: Advances and Future Directions
Zhaohan Feng, Ruiqi Xue, Lei Yuan, Yang Yu, Ning Ding, M. Liu, Bingzhao Gao, Jian-jun Sun, Gang Wang
AI4CE
08 May 2025
Benchmarking Vision, Language, & Action Models in Procedurally Generated, Open Ended Action Environments
Pranav Guruprasad, Yangyue Wang, Sudipta Chowdhury, Harshvardhan Sikka
LM&Ro · VLM
08 May 2025
Interleave-VLA: Enhancing Robot Manipulation with Interleaved Image-Text Instructions
Cunxin Fan, Xiaosong Jia, Yihang Sun, Yixiao Wang, Jianglan Wei, ..., Xiangyu Zhao, M. Tomizuka, Xue Yang, Junchi Yan, Mingyu Ding
LM&Ro · VLM
04 May 2025
DexVLA: Vision-Language Model with Plug-In Diffusion Expert for General Robot Control
Junjie Wen, Y. X. Zhu, Jinming Li, Zhibin Tang, Chaomin Shen, Feifei Feng
VLM
09 Feb 2025