Sketch-to-Skill: Bootstrapping Robot Learning with Human Drawn Trajectory Sketches

14 March 2025
Peihong Yu, Amisha Bhaskar, Anukriti Singh, Zahiruddin Mahammad, Pratap Tokekar
Abstract

Training robotic manipulation policies traditionally requires numerous demonstrations and/or environmental rollouts. While recent Imitation Learning (IL) and Reinforcement Learning (RL) methods have reduced the number of required demonstrations, they still rely on expert knowledge to collect high-quality data, limiting scalability and accessibility. We propose Sketch-to-Skill, a novel framework that leverages human-drawn 2D sketch trajectories to bootstrap and guide RL for robotic manipulation. Our approach extends beyond previous sketch-based methods, which primarily focused on imitation learning or policy conditioning and were limited to specific trained tasks. Sketch-to-Skill employs a Sketch-to-3D Trajectory Generator that translates 2D sketches into 3D trajectories, which are then used to autonomously collect initial demonstrations. We utilize these sketch-generated demonstrations in two ways: to pre-train an initial policy through behavior cloning and to refine this policy through RL with guided exploration. Experimental results demonstrate that Sketch-to-Skill achieves ~96% of the performance of a baseline model trained on teleoperated demonstration data, while exceeding the performance of a pure RL policy by ~170%, using only sketch inputs. This makes robotic manipulation learning more accessible and potentially broadens its applications across various domains.
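
The pipeline the abstract describes (2D sketch, lifted to a 3D trajectory, used to collect demonstrations, then behavior-cloning pre-training followed by RL refinement) can be illustrated with a minimal, self-contained Python sketch. Everything below is an assumption for illustration: the function names, the fixed-plane projection used to lift the 2D sketch, and the linear least-squares policy are placeholders, not the paper's learned Sketch-to-3D Trajectory Generator or its neural BC/RL components.

# Minimal, illustrative sketch of the Sketch-to-Skill pipeline (assumptions only;
# the actual method uses a learned Sketch-to-3D Trajectory Generator and neural policies).
import numpy as np

def lift_sketch_to_3d(sketch_2d, table_height=0.0, scale=0.01):
    """Assumption: project 2D sketch pixels onto a known tabletop plane.
    sketch_2d: (N, 2) array of pixel coordinates drawn by the user."""
    xy = sketch_2d * scale                       # pixels -> meters (assumed scale)
    z = np.full((len(xy), 1), table_height)      # constant height above the table
    return np.hstack([xy, z])                    # (N, 3) waypoints

def rollout_demo(traj_3d, noise=0.005, rng=None):
    """Assumption: a scripted controller tracks the 3D waypoints to collect one
    demonstration of (state, action) pairs, with small execution noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    states = traj_3d[:-1] + rng.normal(0.0, noise, traj_3d[:-1].shape)
    actions = np.diff(traj_3d, axis=0)           # action = desired displacement
    return states, actions

def behavior_cloning(states, actions):
    """Stand-in for the BC pre-training step: fit a linear policy
    action = W^T [state, 1] by least squares."""
    X = np.hstack([states, np.ones((len(states), 1))])
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return W                                     # (4, 3) linear policy parameters

# Toy usage: a straight-line "sketch" of 50 points in pixel space.
sketch = np.stack([np.linspace(100, 300, 50), np.linspace(200, 250, 50)], axis=1)
traj = lift_sketch_to_3d(sketch)
states, actions = rollout_demo(traj)
policy = behavior_cloning(states, actions)
# In the full framework, this pre-trained policy would then be refined with RL,
# with the sketch-generated trajectories guiding exploration.
print("policy shape:", policy.shape)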

@article{yu2025_2503.11918,
  title={Sketch-to-Skill: Bootstrapping Robot Learning with Human Drawn Trajectory Sketches},
  author={Peihong Yu and Amisha Bhaskar and Anukriti Singh and Zahiruddin Mahammad and Pratap Tokekar},
  journal={arXiv preprint arXiv:2503.11918},
  year={2025}
}