ResearchTrend.AI

arXiv:2310.08576 · Cited By
Learning to Act from Actionless Videos through Dense Correspondences


12 October 2023
Po-Chen Ko, Jiayuan Mao, Yilun Du, Shao-Hua Sun, Josh Tenenbaum

Papers citing "Learning to Act from Actionless Videos through Dense Correspondences"

23 / 23 papers shown
Symbolically-Guided Visual Plan Inference from Uncurated Video Data
Wenyan Yang, Ahmet Tikna, Yi Zhao, Yuying Zhang, Luigi Palopoli, Marco Roveri, Joni Pajarinen
VGen · 13 May 2025

Pixel Motion as Universal Representation for Robot Control
Kanchana Ranasinghe, Xiang Li, Cristina Mata, J. Park, Michael S. Ryoo
VGen · 12 May 2025

VISTA: Generative Visual Imagination for Vision-and-Language Navigation
Yanjia Huang, Mingyang Wu, Renjie Li, Zhengzhong Tu
LM&Ro · 09 May 2025

CLAM: Continuous Latent Action Models for Robot Learning from Unlabeled Demonstrations
Anthony Liang, Pavel Czempin, Matthew Hong, Yutai Zhou, Erdem Biyik, Stephen Tu
08 May 2025

Generative AI in Embodied Systems: System-Level Analysis of Performance, Efficiency and Scalability
Zishen Wan, Jiayi Qian, Yuhang Du, Jason J. Jabbour, Yilun Du, Yang Katie Zhao, A. Raychowdhury, Tushar Krishna, Vijay Janapa Reddi
LM&Ro · 26 Apr 2025

Solving New Tasks by Adapting Internet Video Knowledge
Calvin Luo, Zilai Zeng, Yilun Du, Chen Sun
21 Apr 2025

AdaWorld: Learning Adaptable World Models with Latent Actions
Shenyuan Gao, Siyuan Zhou, Yilun Du, Jun Zhang, Chuang Gan
VGen · 24 Mar 2025

Unified Video Action Model
Shuang Li, Yihuai Gao, Dorsa Sadigh, Shuran Song
VGen · 28 Feb 2025

VILP: Imitation Learning with Latent Video Planning
Zhengtong Xu, Qiang Qiu, Yu She
VGen · 03 Feb 2025

Motion Tracks: A Unified Representation for Human-Robot Transfer in Few-Shot Imitation Learning
Juntao Ren, Priya Sundaresan, Dorsa Sadigh, Sanjiban Choudhury, Jeannette Bohg
13 Jan 2025

Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning
Jiange Yang, Haoyi Zhu, Yunhong Wang, Gangshan Wu, Tong He, Limin Wang
21 Nov 2024

Grounding Video Models to Actions through Goal Conditioned Exploration
Yunhao Luo, Yilun Du
LM&Ro, VGen · 11 Nov 2024

SPOT: SE(3) Pose Trajectory Diffusion for Object-Centric Manipulation
Cheng-Chun Hsu, Bowen Wen, Jie Xu, Yashraj S. Narang, Xiaolong Wang, Yuke Zhu, Joydeep Biswas, Stan Birchfield
DiffM · 01 Nov 2024

Latent Action Pretraining from Videos
Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Sejune Joo, Jianwei Yang, ..., Kimin Lee, J. Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo
15 Oct 2024

VideoAgent: Self-Improving Video Generation
Achint Soni, Sreyas Venkataraman, Abhranil Chandra, Sebastian Fischmeister, Percy Liang, Bo Dai, Sherry Yang
LM&Ro, VGen · 14 Oct 2024

Embodiment-Agnostic Action Planning via Object-Part Scene Flow
Weiliang Tang, Jia-Hui Pan, Wei Zhan, Jianshu Zhou, Huaxiu Yao, Yun-Hui Liu, M. Tomizuka, Mingyu Ding, Chi-Wing Fu
16 Sep 2024

ARDuP: Active Region Video Diffusion for Universal Policies
Shuaiyi Huang, Mara Levy, Zhenyu Jiang, Anima Anandkumar, Yuke Zhu, Linxi Fan, De-An Huang, Abhinav Shrivastava
VGen · 19 Jun 2024

Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
Shenyuan Gao, Jiazhi Yang, Li Chen, Kashyap Chitta, Yihang Qiu, Andreas Geiger, Jun Zhang, Hongyang Li
27 May 2024

Diffusion-Reward Adversarial Imitation Learning
Chun-Mao Lai, Hsiang-Chun Wang, Ping-Chun Hsieh, Yu-Chiang Frank Wang, Min-Hung Chen, Shao-Hua Sun
25 May 2024

COMBO: Compositional World Models for Embodied Multi-Agent Cooperation
Hongxin Zhang, Zeyuan Wang, Qiushi Lyu, Zheyuan Zhang, Sunli Chen, Tianmin Shu, Kwonjoon Lee, Yilun Du, Chuang Gan
16 Apr 2024

Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
Aravind Sivakumar, Kenneth Shaw, Deepak Pathak
21 Feb 2022

Adversarial Imitation Learning from Video using a State Observer
Haresh Karnan, Garrett A. Warnell, F. Torabi, Peter Stone
GAN · 01 Feb 2022

Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets
F. Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher, G. Georgakis, Kostas Daniilidis, Chelsea Finn, Sergey Levine
27 Sep 2021