BiFold: Bimanual Cloth Folding with Language Guidance

Cloth folding is a complex task due to the inevitable self-occlusions of clothes, their complicated dynamics, and the disparate materials, geometries, and textures that garments can have. In this work, we learn folding actions conditioned on text commands. Translating high-level, abstract instructions into precise robotic actions requires sophisticated language understanding and manipulation capabilities. To this end, we leverage a pre-trained vision-language model and repurpose it to predict manipulation actions. Our model, BiFold, can take context into account and achieves state-of-the-art performance on an existing language-conditioned folding benchmark. To address the lack of annotated bimanual folding data, we introduce a novel dataset with automatically parsed actions and language-aligned instructions, enabling better learning of text-conditioned manipulation. BiFold attains the best performance on our dataset and demonstrates strong generalization to new instructions, garments, and environments.
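
The abstract describes repurposing a pre-trained vision-language model to predict bimanual folding actions from an image and a text instruction. The sketch below is only a minimal illustration of that general idea, not the BiFold architecture: the stand-in embeddings, the fusion head, and the 8-coordinate pick-and-place output are assumptions introduced here for illustration.

```python
# Illustrative sketch only (not the authors' implementation): a frozen
# vision-language backbone is assumed to provide image and text embeddings,
# and a small head regresses bimanual pick-and-place points from the fused
# features. Dimensions and the output layout are assumptions.
import torch
import torch.nn as nn


class BimanualFoldingHead(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Fuse the image embedding and the instruction embedding.
        self.fuse = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
        )
        # Predict normalized 2D pick and place points for both arms:
        # (left_pick, left_place, right_pick, right_place) -> 8 coordinates.
        self.action = nn.Linear(hidden_dim, 8)

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([image_emb, text_emb], dim=-1))
        return torch.sigmoid(self.action(fused)).view(-1, 4, 2)


# Stand-in embeddings; in practice these would come from a pre-trained
# vision-language model applied to the garment image and the instruction.
image_emb = torch.randn(1, 512)
text_emb = torch.randn(1, 512)
actions = BimanualFoldingHead()(image_emb, text_emb)
print(actions.shape)  # torch.Size([1, 4, 2])
```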
@article{barbany2025_2501.16458,
  title   = {BiFold: Bimanual Cloth Folding with Language Guidance},
  author  = {Oriol Barbany and Adrià Colomé and Carme Torras},
  journal = {arXiv preprint arXiv:2501.16458},
  year    = {2025}
}