
Open World Scene Graph Generation using Vision Language Models

Abstract

Scene-Graph Generation (SGG) seeks to recognize the objects in an image and distill their salient pairwise relationships. Most methods depend on dataset-specific supervision to learn the variety of interactions, restricting their usefulness in open-world settings involving novel objects and/or relations. Even methods that leverage large Vision Language Models (VLMs) typically require benchmark-specific fine-tuning. We introduce Open-World SGG, a training-free, efficient, model-agnostic framework that taps directly into the pretrained knowledge of VLMs to produce scene graphs with zero additional learning. Casting SGG as a zero-shot structured-reasoning problem, our method combines multimodal prompting, embedding alignment, and a lightweight pair-refinement strategy, enabling inference over unseen object vocabularies and relation sets. To assess this setting, we formalize an Open-World evaluation protocol that measures performance when no SGG-specific data have been observed in terms of either objects or relations. Experiments on Visual Genome, Open Images V6, and the Panoptic Scene Graph (PSG) dataset demonstrate the capacity of pretrained VLMs to perform relational understanding without task-level training.
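The abstract names three components without spelling out their mechanics. The sketch below illustrates only the embedding-alignment step: a VLM emits free-form object and relation phrases, which must be snapped onto an arbitrary, user-supplied label vocabulary. This is a minimal sketch, assuming a sentence-transformer text encoder and nearest-neighbor cosine matching; the encoder choice (all-MiniLM-L6-v2), the function names, and the example vocabulary are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in text encoder; the paper may use a different embedding model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def align_to_vocab(phrases, vocab):
    # Map each free-form VLM phrase to its nearest label in `vocab`
    # by cosine similarity of unit-normalized text embeddings.
    p = encoder.encode(phrases, normalize_embeddings=True)
    v = encoder.encode(vocab, normalize_embeddings=True)
    sims = p @ v.T  # rows: phrases, columns: vocabulary labels
    best = sims.argmax(axis=1)
    return [vocab[i] for i in best], sims[np.arange(len(phrases)), best]

# The relation vocabulary is supplied at inference time and can be
# swapped freely; no retraining is involved.
relations = ["on", "under", "holding", "riding", "next to"]
labels, scores = align_to_vocab(["is perched atop", "grasping in hand"], relations)
print(list(zip(labels, scores)))  # e.g. [('on', ...), ('holding', ...)]

Because the vocabulary is a runtime argument rather than a training target, the same alignment step serves unseen object and relation sets alike, which is what makes the open-world claim testable without task-level training.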

@article{dutta2025_2506.08189,
  title={Open World Scene Graph Generation using Vision Language Models},
  author={Amartya Dutta and Kazi Sajeed Mehrab and Medha Sawhney and Abhilash Neog and Mridul Khurana and Sepideh Fatemi and Aanish Pradhan and M. Maruf and Ismini Lourentzou and Arka Daw and Anuj Karpatne},
  journal={arXiv preprint arXiv:2506.08189},
  year={2025}
}
Length: 6 pages (main), 3 pages (bibliography), 13 pages (appendix); 11 figures, 4 tables