ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

K-VIL: Keypoints-based Visual Imitation Learning
Jianfeng Gao, Z. Tao, Noémie Jaquier, Tamim Asfour
7 September 2022 · arXiv:2209.03277 · VGen, SSL

Papers citing "K-VIL: Keypoints-based Visual Imitation Learning" (13 papers)
ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow
Changhe Chen, Quantao Yang, Xiaohao Xu, Nima Fazeli, Olov Andersson
2 May 2025

Robotic Visual Instruction
Y. Li, Ziyang Gong, Hao Li, Xiaoqi Huang, Haolan Kang, Guangping Bai, Xianzheng Ma
1 May 2025 · LM&Ro

FUNCTO: Function-Centric One-Shot Imitation Learning for Tool Manipulation
Chao Tang, Anxing Xiao, Yuhong Deng, Tianrun Hu, Wenlong Dong, Hanbo Zhang, David Hsu, Hong Zhang
24 February 2025

Out-of-Distribution Recovery with Object-Centric Keypoint Inverse Policy for Visuomotor Imitation Learning
George Jiayuan Gao, Tianyu Li, Nadia Figueroa
5 November 2024

RECON: Reducing Causal Confusion with Human-Placed Markers
Robert Ramirez Sanchez, Heramb Nemlekar, Shahabedin Sagheb, Cara M. Nunez, Dylan P. Losey
20 September 2024 · CML

BiKC: Keypose-Conditioned Consistency Policy for Bimanual Robotic Manipulation
Dongjie Yu, Hang Xu, Yizhou Chen, Yi Ren, Jia Pan
14 June 2024

AutoGPT+P: Affordance-based Task Planning with Large Language Models
Timo Birr, Christoph Pohl, Abdelrahman Younes, Tamim Asfour
16 February 2024 · LM&Ro

Bridging Low-level Geometry to High-level Concepts in Visual Servoing of Robot Manipulation Task Using Event Knowledge Graphs and Vision-Language Models
Chen Jiang, Martin Jägersand
5 October 2023

CLIPUNetr: Assisting Human-robot Interface for Uncalibrated Visual Servoing Control with CLIP-driven Referring Expression Segmentation
Chen Jiang, Yuchen Yang, Martin Jägersand
17 September 2023

One-Shot Transfer of Affordance Regions? AffCorrs!
Denis Hadjivelichkov, Sicelukwanda Zwane, M. Deisenroth, Lourdes Agapito, Dimitrios Kanoulas
15 September 2022

Adversarial Imitation Learning from Video using a State Observer
Haresh Karnan, Garrett A. Warnell, F. Torabi, Peter Stone
1 February 2022 · GAN

Learning Periodic Tasks from Human Demonstrations
Jingyun Yang, Junwu Zhang, Connor Settle, Akshara Rai, Rika Antonova, Jeannette Bohg
28 September 2021

A Geometric Perspective on Visual Imitation Learning
Jun Jin, Laura Petrich, Masood Dehghan, Martin Jägersand
5 March 2020