Verbal Focus-of-Attention System for Learning-from-Observation
arXiv:2007.08705

17 July 2020
Naoki Wake, Iori Yanokura, Kazuhiro Sasabuchi, Katsushi Ikeuchi

Papers citing "Verbal Focus-of-Attention System for Learning-from-Observation"

10 / 10 papers shown

Open-Vocabulary Action Localization with Iterative Visual Prompting
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
VLM · 30 Aug 2024

GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
LM&Ro · 20 Nov 2023

ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
LM&Ro · 08 Apr 2023

Interactive Task Encoding System for Learning-from-Observation
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
21 Dec 2022

Semantic constraints to represent common sense required in household actions for multi-modal Learning-from-observation robot
Katsushi Ikeuchi, Naoki Wake, Riku Arakawa, Kazuhiro Sasabuchi, Jun Takamatsu
LM&Ro · 03 Mar 2021

Text-driven object affordance for guiding grasp-type recognition in multimodal robot teaching
Naoki Wake, Daichi Saito, Kazuhiro Sasabuchi, Hideki Koike, Katsushi Ikeuchi
27 Feb 2021

Understanding Action Sequences based on Video Captioning for Learning-from-Observation
Iori Yanokura, Naoki Wake, Kazuhiro Sasabuchi, Katsushi Ikeuchi, Masayuki Inaba
09 Dec 2020

Grasp-type Recognition Leveraging Object Affordance
Naoki Wake, Kazuhiro Sasabuchi, Katsushi Ikeuchi
26 Aug 2020

A Learning-from-Observation Framework: One-Shot Robot Teaching for Grasp-Manipulation-Release Household Operations
Naoki Wake, Riku Arakawa, Iori Yanokura, Takuya Kiyokawa, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
04 Aug 2020

Task-oriented Motion Mapping on Robots of Various Configuration using Body Role Division
Kazuhiro Sasabuchi, Naoki Wake, Katsushi Ikeuchi
17 Jul 2020