
HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models

Trishna Chakraborty
Udita Ghosh
Xiaopan Zhang
Fahim Faisal Niloy
Yue Dong
Jiachen Li
Amit K. Roy-Chowdhury
Chengyu Song
Main: 8 pages · 5 figures · 15 tables · Bibliography: 4 pages · Appendix: 5 pages
Abstract

Large language models (LLMs) are increasingly being adopted as the cognitive core of embodied agents. However, inherited hallucinations, which stem from failures to ground user instructions in the observed physical environment, can lead to navigation errors, such as searching for a refrigerator that does not exist. In this paper, we present the first systematic study of hallucinations in LLM-based embodied agents performing long-horizon tasks under scene-task inconsistencies. Our goal is to understand to what extent hallucinations occur, what types of inconsistencies trigger them, and how current models respond. To this end, we construct a hallucination probing set, built on an existing benchmark, that induces hallucination rates up to 40x higher than base prompts. Evaluating 12 models across two simulation environments, we find that while models exhibit reasoning, they fail to resolve scene-task inconsistencies, highlighting fundamental limitations in handling infeasible tasks. We also provide actionable insights on ideal model behavior for each scenario, offering guidance for developing more robust and reliable planning strategies.
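
To make the idea of a scene-task inconsistency probe concrete, below is a minimal, hypothetical Python sketch. It is not the paper's HEAL pipeline: the scene object list, the query_planner stub, and the keyword-based hallucination check are illustrative assumptions, and a real evaluation would substitute the benchmark's scenes and an actual LLM planner.

from typing import List, Tuple

# Objects the agent can actually observe in this (hypothetical) scene.
SCENE_OBJECTS: List[str] = ["sofa", "table", "microwave", "sink"]


def make_prompts(scene_objects: List[str], missing_object: str) -> Tuple[str, str]:
    """Build a base prompt grounded in the scene and an inconsistent variant
    that asks the agent to use an object absent from the scene."""
    scene_desc = "Visible objects: " + ", ".join(scene_objects) + "."
    base = scene_desc + "\nTask: put the mug on the table. List the steps."
    inconsistent = (
        scene_desc
        + "\nTask: put the mug inside the " + missing_object + ". List the steps."
    )
    return base, inconsistent


def query_planner(prompt: str) -> str:
    """Stand-in for a call to an LLM planner; returns a step-by-step plan."""
    raise NotImplementedError("Connect an LLM of your choice here.")


def hallucinated(plan: str, missing_object: str) -> bool:
    """Naive check: the plan acts on the missing object instead of flagging
    that the task is infeasible in the observed scene."""
    text = plan.lower()
    mentions_missing = missing_object.lower() in text
    flags_infeasible = any(
        k in text for k in ("not present", "cannot", "no such", "does not exist")
    )
    return mentions_missing and not flags_infeasible


if __name__ == "__main__":
    base, probe = make_prompts(SCENE_OBJECTS, missing_object="refrigerator")
    print(probe)
    # plan = query_planner(probe)
    # print("hallucinated:", hallucinated(plan, "refrigerator"))

The sketch mirrors the inconsistency described above: the task references a refrigerator that the scene description does not contain, and a well-grounded planner should flag the task as infeasible rather than produce steps for the missing object.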

@article{chakraborty2025_2506.15065,
  title={HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models},
  author={Trishna Chakraborty and Udita Ghosh and Xiaopan Zhang and Fahim Faisal Niloy and Yue Dong and Jiachen Li and Amit K. Roy-Chowdhury and Chengyu Song},
  journal={arXiv preprint arXiv:2506.15065},
  year={2025}
}