
Kaiwu: A Multimodal Manipulation Dataset and Framework for Robot Learning and Human-Robot Interaction

Abstract

Cutting-edge robot learning techniques, including foundation models and imitation learning from humans, place huge demands on large-scale, high-quality datasets, which constitute one of the bottlenecks in the field of general intelligent robots. This paper presents the Kaiwu multimodal dataset to address the lack of real-world synchronized multimodal data in sophisticated assembly scenarios, especially data with dynamics information and fine-grained labelling. The dataset provides an integrated human-environment-robot data collection framework covering 20 subjects and 30 interaction objects, resulting in a total of 11,664 instances of integrated actions. For each demonstration, hand motions, operation pressures, sounds of the assembly process, multi-view videos, high-precision motion capture information, eye gaze with first-person video, and electromyography signals are all recorded. Fine-grained multi-level annotation based on absolute timestamps and semantic segmentation labelling are performed. The Kaiwu dataset aims to facilitate research on robot learning, dexterous manipulation, human intention investigation, and human-robot collaboration.
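The abstract describes demonstrations as several sensor streams synchronized on absolute timestamps plus multi-level annotations. The sketch below illustrates one plausible way such a record could be organized and queried; all class and field names, and the timestamp-alignment helper, are illustrative assumptions rather than the dataset's actual schema or API.

```python
# Hypothetical sketch of one Kaiwu-style demonstration instance.
# Field names and the alignment helper are assumptions, not the dataset's real API.
from dataclasses import dataclass, field
from typing import Dict, List
import bisect


@dataclass
class ModalityStream:
    """A single sensor stream with absolute timestamps (seconds) per sample."""
    timestamps: List[float]   # monotonically increasing absolute timestamps
    samples: List[list]       # raw samples; shape depends on the modality

    def sample_at(self, t: float) -> list:
        """Return the sample recorded at or just before absolute time t."""
        i = bisect.bisect_right(self.timestamps, t) - 1
        return self.samples[max(i, 0)]


@dataclass
class DemonstrationInstance:
    """One integrated action: synchronized modality streams plus
    multi-level annotations keyed to absolute timestamps."""
    subject_id: int
    object_name: str
    streams: Dict[str, ModalityStream] = field(default_factory=dict)
    # (start_time, end_time, label) tuples, e.g. action segments
    annotations: List[tuple] = field(default_factory=list)

    def snapshot(self, t: float) -> Dict[str, list]:
        """Gather the latest sample from every modality at absolute time t."""
        return {name: s.sample_at(t) for name, s in self.streams.items()}


# Minimal usage example with toy data.
if __name__ == "__main__":
    demo = DemonstrationInstance(subject_id=1, object_name="bracket")
    demo.streams["hand_motion"] = ModalityStream(
        timestamps=[0.00, 0.02, 0.04], samples=[[0.1], [0.2], [0.3]])
    demo.streams["emg"] = ModalityStream(
        timestamps=[0.00, 0.01, 0.02, 0.03], samples=[[5], [6], [7], [8]])
    demo.annotations.append((0.0, 0.04, "grasp"))
    print(demo.snapshot(0.025))  # latest hand_motion and emg samples before t=0.025
```

Aligning every modality to a shared absolute-time axis, as sketched here, is what makes cross-modal queries (e.g. "what was the EMG reading when the grasp began?") straightforward, which is presumably why the dataset's annotation is anchored to absolute timestamps.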

@article{jiang2025_2503.05231,
  title={Kaiwu: A Multimodal Manipulation Dataset and Framework for Robot Learning and Human-Robot Interaction},
  author={Shuo Jiang and Haonan Li and Ruochen Ren and Yanmin Zhou and Zhipeng Wang and Bin He},
  journal={arXiv preprint arXiv:2503.05231},
  year={2025}
}