From Pixels to Graphs: using Scene and Knowledge Graphs for HD-EPIC VQA Challenge

Abstract

This report presents SceneNet and KnowledgeNet, our approaches developed for the HD-EPIC VQA Challenge 2025. SceneNet leverages scene graphs generated with a multi-modal large language model (MLLM) to capture fine-grained object interactions, spatial relationships, and temporally grounded events. In parallel, KnowledgeNet incorporates external commonsense knowledge from ConceptNet to introduce high-level semantic connections between entities, enabling reasoning beyond directly observable visual evidence. Each method demonstrates distinct strengths across the seven question categories of the HD-EPIC benchmark, and their combination within our framework achieves an overall accuracy of 44.21% on the challenge, highlighting the effectiveness of graph-based representations for complex egocentric VQA tasks.

@article{taluzzi2025_2506.08553,
  title={From Pixels to Graphs: using Scene and Knowledge Graphs for HD-EPIC VQA Challenge},
  author={Agnese Taluzzi and Davide Gesualdi and Riccardo Santambrogio and Chiara Plizzari and Francesca Palermo and Simone Mentasti and Matteo Matteucci},
  journal={arXiv preprint arXiv:2506.08553},
  year={2025}
}