LangPert: Detecting and Handling Task-level Perturbations for Robust Object Rearrangement

14 April 2025
Xu Yin
Min-Sung Yoon
Yuchi Huo
Kang Zhang
Sung-eui Yoon
Abstract

Task execution for object rearrangement can be challenged by Task-Level Perturbations (TLP), i.e., unexpected object additions, removals, and displacements that disrupt underlying visual policies and fundamentally compromise task feasibility and progress. To address these challenges, we present LangPert, a language-based framework designed to detect and mitigate TLP situations in tabletop rearrangement tasks. LangPert integrates a Visual Language Model (VLM) to comprehensively monitor the policy's skill execution and environmental TLP, while leveraging a Hierarchical Chain-of-Thought (HCoT) reasoning mechanism to enhance the Large Language Model (LLM)'s contextual understanding and generate adaptive, corrective skill-execution plans. Our experimental results demonstrate that LangPert handles diverse TLP situations more effectively than baseline methods, achieving higher task completion rates, improved execution efficiency, and potential generalization to unseen scenarios.
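The abstract describes a monitor-and-replan architecture: a VLM watches for perturbations during skill execution, and an HCoT-prompted LLM produces corrective plans. The Python sketch below illustrates that control loop under stated assumptions only; the names Scene, detect_tlp, and replan are hypothetical placeholders, and the VLM/LLM components are replaced with rule-based stubs, so this is not the authors' implementation.

```python
# Illustrative sketch of a detect-and-replan loop for tabletop rearrangement.
# All names are hypothetical; the VLM monitor and HCoT/LLM planner are stubbed
# with simple rules so the example runs standalone.

from dataclasses import dataclass, field


@dataclass
class Scene:
    """Minimal tabletop state: object name -> (x, y) position."""
    objects: dict = field(default_factory=dict)


def detect_tlp(expected: Scene, observed: Scene, tol: float = 0.05) -> list[str]:
    """Stand-in for the VLM monitor: report additions, removals, displacements."""
    events = []
    for name in observed.objects.keys() - expected.objects.keys():
        events.append(f"addition: {name}")
    for name in expected.objects.keys() - observed.objects.keys():
        events.append(f"removal: {name}")
    for name in expected.objects.keys() & observed.objects.keys():
        ex, ey = expected.objects[name]
        ox, oy = observed.objects[name]
        if abs(ex - ox) > tol or abs(ey - oy) > tol:
            events.append(f"displacement: {name}")
    return events


def replan(goal: Scene, observed: Scene, events: list[str]) -> list[str]:
    """Stand-in for the HCoT-prompted LLM: emit corrective skill calls."""
    plan = []
    for name, target in goal.objects.items():
        current = observed.objects.get(name)
        if current is None or current != target:
            plan.append(f"pick_and_place({name}, target={target})")
    for name in observed.objects.keys() - goal.objects.keys():
        plan.append(f"remove_from_workspace({name})")
    return plan


if __name__ == "__main__":
    goal = Scene({"mug": (0.2, 0.3), "plate": (0.5, 0.1)})
    expected = Scene({"mug": (0.2, 0.3), "plate": (0.5, 0.1)})
    # A perturbed observation: the plate was moved and a bottle appeared.
    observed = Scene({"mug": (0.2, 0.3), "plate": (0.7, 0.4), "bottle": (0.1, 0.6)})

    events = detect_tlp(expected, observed)
    print("Detected TLP events:", events)
    print("Corrective plan:", replan(goal, observed, events))
```

In the paper's framing, the rule-based `detect_tlp` would be replaced by VLM queries over camera observations and `replan` by hierarchical LLM reasoning over the detected events and remaining task goals.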

@article{yin2025_2504.09893,
  title={LangPert: Detecting and Handling Task-level Perturbations for Robust Object Rearrangement},
  author={Xu Yin and Min-Sung Yoon and Yuchi Huo and Kang Zhang and Sung-Eui Yoon},
  journal={arXiv preprint arXiv:2504.09893},
  year={2025}
}