A Parameter-Efficient Tuning Framework for Language-guided Object Grounding and Robot Grasping

28 September 2024
Houjian Yu, Mingen Li, Alireza Rezazadeh, Yang Yang, Changhyun Choi
Abstract

The language-guided robot grasping task requires a robot agent to integrate multimodal information from both visual and linguistic inputs to predict actions for target-driven grasping. While recent approaches utilizing Multimodal Large Language Models (MLLMs) have shown promising results, their extensive computation and data demands limit the feasibility of local deployment and customization. To address this, we propose a novel CLIP-based multimodal parameter-efficient tuning (PET) framework designed for three language-guided object grounding and grasping tasks: (1) Referring Expression Segmentation (RES), (2) Referring Grasp Synthesis (RGS), and (3) Referring Grasp Affordance (RGA). Our approach introduces two key innovations: a bi-directional vision-language adapter that aligns multimodal inputs for pixel-level language understanding, and a depth fusion branch that incorporates geometric cues to facilitate robot grasping predictions. Experimental results demonstrate superior performance on the RES object grounding task compared with existing CLIP-based full-model tuning or PET approaches. In the RGS and RGA tasks, our model not only effectively interprets object attributes based on simple language descriptions but also shows strong potential for comprehending complex spatial reasoning scenarios, such as when multiple identical objects are present in the workspace. Project page: this https URL
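To make the two components described above more concrete, the sketch below illustrates one plausible way a bi-directional vision-language adapter and a depth fusion branch could be wired on top of frozen CLIP features. All module names, dimensions, and fusion choices (cross-attention in both directions, additive depth injection) are assumptions for illustration only and are not taken from the paper's implementation.

```python
# Hypothetical sketch of a bi-directional vision-language adapter and a depth
# fusion branch, loosely following the abstract's description. Shapes, module
# names, and fusion choices are illustrative assumptions.
import torch
import torch.nn as nn


class BiDirectionalVLAdapter(nn.Module):
    """Exchanges information between frozen CLIP visual and text tokens
    via lightweight cross-attention in both directions (assumed design)."""

    def __init__(self, vis_dim=768, txt_dim=512, adapter_dim=256, num_heads=8):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, adapter_dim)
        self.txt_proj = nn.Linear(txt_dim, adapter_dim)
        # vision attends to language, and language attends to vision
        self.t2v_attn = nn.MultiheadAttention(adapter_dim, num_heads, batch_first=True)
        self.v2t_attn = nn.MultiheadAttention(adapter_dim, num_heads, batch_first=True)
        self.vis_out = nn.Linear(adapter_dim, vis_dim)
        self.txt_out = nn.Linear(adapter_dim, txt_dim)

    def forward(self, vis_tokens, txt_tokens):
        v = self.vis_proj(vis_tokens)          # (B, Nv, D)
        t = self.txt_proj(txt_tokens)          # (B, Nt, D)
        v_upd, _ = self.t2v_attn(v, t, t)      # vision queries language
        t_upd, _ = self.v2t_attn(t, v, v)      # language queries vision
        # residual connections back into the frozen backbone's dimensions
        return vis_tokens + self.vis_out(v_upd), txt_tokens + self.txt_out(t_upd)


class DepthFusionBranch(nn.Module):
    """Injects geometric cues from a depth map into the visual tokens via a
    small convolutional encoder and additive fusion (assumed design)."""

    def __init__(self, vis_dim=768, patch=16):
        super().__init__()
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=patch, stride=patch),  # patchify depth
            nn.GELU(),
            nn.Conv2d(64, vis_dim, kernel_size=1),
        )

    def forward(self, vis_tokens, depth):
        d = self.depth_encoder(depth)        # (B, C, H/p, W/p)
        d = d.flatten(2).transpose(1, 2)     # (B, Nv, C) to match the visual tokens
        return vis_tokens + d                # additive geometric cue


if __name__ == "__main__":
    B, Nv, Nt = 2, 196, 20
    vis = torch.randn(B, Nv, 768)        # frozen CLIP ViT patch tokens (assumed shape)
    txt = torch.randn(B, Nt, 512)        # frozen CLIP text tokens (assumed shape)
    depth = torch.randn(B, 1, 224, 224)  # depth map aligned with the RGB input

    adapter = BiDirectionalVLAdapter()
    fusion = DepthFusionBranch()
    vis2, txt2 = adapter(vis, txt)
    vis3 = fusion(vis2, depth)
    print(vis3.shape, txt2.shape)        # (2, 196, 768) and (2, 20, 512)
```

In a PET setting, only the adapter and fusion modules would be trained while the CLIP backbone stays frozen; the actual adapter placement and grasp-prediction heads in the paper may differ.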

@article{yu2025_2409.19457,
  title={A Parameter-Efficient Tuning Framework for Language-guided Object Grounding and Robot Grasping},
  author={Houjian Yu and Mingen Li and Alireza Rezazadeh and Yang Yang and Changhyun Choi},
  journal={arXiv preprint arXiv:2409.19457},
  year={2025}
}