arXiv: 2409.14066
KALIE: Fine-Tuning Vision-Language Models for Open-World Manipulation without Robot Data
21 September 2024
Grace Tang, Swetha Rajkumar, Yifei Zhou, Homer Walke, Sergey Levine, Kuan Fang
Tags: LM&Ro, VLM
Papers citing "KALIE: Fine-Tuning Vision-Language Models for Open-World Manipulation without Robot Data" (5 / 5 papers shown)
1. ManipBench: Benchmarking Vision-Language Models for Low-Level Robot Manipulation
   Enyu Zhao, Vedant Raval, Hejia Zhang, Jiageng Mao, Zeyu Shangguan, S. Nikolaidis, Y. Wang, Daniel Seita
   Tags: LM&Ro, CoGe
   14 May 2025

2. D-CODA: Diffusion for Coordinated Dual-Arm Data Augmentation
   Isabella Liu, Jason Chen, Gaurav Sukhatme, Daniel Seita
   08 May 2025

3. HybridGen: VLM-Guided Hybrid Planning for Scalable Data Generation of Imitation Learning
   Wensheng Wang, Ning Tan
   Tags: LM&Ro, OffRL
   17 Mar 2025

4. A Real-to-Sim-to-Real Approach to Robotic Manipulation with VLM-Generated Iterative Keypoint Rewards
   Shivansh Patel, Xinchen Yin, Wenlong Huang, Shubham Garg, H. Nayyeri, Li Fei-Fei, Svetlana Lazebnik, Y. Li
   12 Feb 2025

5. Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision, Physics Simulation, and a Robot with Reset
   Andrew Goldberg, Kavish Kondap, Tianshuang Qiu, Zehan Ma, Letian Fu, Justin Kerr, Huang Huang, Kaiyuan Chen, Kuan Fang, Ken Goldberg
   25 Sep 2024