GraphPad: Inference-Time 3D Scene Graph Updates for Embodied Question Answering

Structured scene representations are a core component of embodied agents, helping to consolidate raw sensory streams into readable, modular, and searchable formats. Due to their high computational overhead, many approaches build such representations in advance of the task. However, when the task specifications change, these static approaches become inadequate, as they may miss key objects, spatial relations, and details. We introduce GraphPad, a modifiable structured memory that an agent can tailor to the needs of the task through API calls. It comprises a mutable scene graph representing the environment, a navigation log indexing frame-by-frame content, and a scratchpad for task-specific notes. Together, these components make GraphPad a dynamic workspace that remains complete, current, and aligned with the agent's immediate understanding of the scene and its task. On the OpenEQA benchmark, GraphPad attains 55.3% accuracy, 3.0 points higher than an image-only baseline using the same vision-language model, while operating with five times fewer input frames. These results show that allowing online, language-driven refinement of 3D memory yields more informative representations without extra training or data collection.
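To make the three components concrete, here is a minimal Python sketch of a GraphPad-style memory. All class and method names are illustrative assumptions, not the paper's actual API: the idea is only that the agent holds a mutable scene graph, a frame-indexed navigation log, and a scratchpad, and edits them through explicit calls at inference time.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three components named in the abstract;
# the names below are illustrative, not the paper's API.
@dataclass
class GraphPad:
    nodes: dict = field(default_factory=dict)       # mutable scene graph: object id -> attributes
    edges: list = field(default_factory=list)       # spatial relations (subject, relation, object)
    nav_log: list = field(default_factory=list)     # frame-by-frame content index
    scratchpad: list = field(default_factory=list)  # task-specific notes

    # API calls the agent could issue at inference time to tailor the memory
    def add_object(self, obj_id, **attrs):
        self.nodes[obj_id] = attrs

    def add_relation(self, subj, relation, obj):
        self.edges.append((subj, relation, obj))

    def log_frame(self, frame_idx, summary):
        self.nav_log.append((frame_idx, summary))

    def note(self, text):
        self.scratchpad.append(text)

# Example: refining the memory while answering a question
pad = GraphPad()
pad.add_object("mug_1", color="red", room="kitchen")
pad.add_object("table_1", room="kitchen")
pad.add_relation("mug_1", "on", "table_1")
pad.log_frame(12, "kitchen table with a red mug")
pad.note("question asks about the mug's color -> red")
```

Because every update is an ordinary function call, a language model can drive the memory by emitting such calls, which is what keeps the representation aligned with the task as it evolves.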
@article{ali2025_2506.01174,
  title={GraphPad: Inference-Time 3D Scene Graph Updates for Embodied Question Answering},
  author={Muhammad Qasim Ali and Saeejith Nair and Alexander Wong and Yuchen Cui and Yuhao Chen},
  journal={arXiv preprint arXiv:2506.01174},
  year={2025}
}