ResearchTrend.AI
FDSG: Forecasting Dynamic Scene Graphs

2 June 2025
Yi Yang
Yuren Cong
Hao Cheng
Bodo Rosenhahn
Michael Ying Yang
    AI4TS
Main: 8 pages · 9 figures · 16 tables · Bibliography: 2 pages · Appendix: 11 pages
Abstract

Dynamic scene graph generation extends scene graph generation from images to videos by modeling entity relationships and their temporal evolution. However, existing methods either generate scene graphs from observed frames without explicitly modeling temporal dynamics, or predict only relationships while assuming static entity labels and locations. These limitations hinder effective extrapolation of both entity and relationship dynamics, restricting video scene understanding. We propose Forecasting Dynamic Scene Graphs (FDSG), a novel framework that predicts future entity labels, bounding boxes, and relationships for unobserved frames, while also generating scene graphs for observed frames. Our scene graph forecast module leverages query decomposition and neural stochastic differential equations to model entity and relationship dynamics. A temporal aggregation module further refines predictions by integrating forecasted and observed information via cross-attention. To benchmark FDSG, we introduce Scene Graph Forecasting, a new task for full future scene graph prediction. Experiments on Action Genome show that FDSG outperforms state-of-the-art methods on dynamic scene graph generation, scene graph anticipation, and scene graph forecasting. Code will be released upon publication.
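To make the forecasting idea concrete, the sketch below rolls entity queries forward with an Euler–Maruyama discretization of a neural SDE, dq = f(q)dt + g(q)dW, where f and g are small learned networks. This is only an illustrative NumPy sketch under assumptions of ours: the function names (`forecast_queries`, `mlp`), the untrained random weights, and the solver choice are not taken from the paper, which does not specify its implementation here.

```python
import numpy as np

def init_mlp(d_in, d_hidden, d_out, rng):
    """Random (untrained) single-hidden-layer MLP parameters."""
    return (rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))

def mlp(params, x):
    """Apply the MLP; used for both the drift f and diffusion g nets."""
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def forecast_queries(q, drift, diffusion, n_steps, dt, rng):
    """Euler-Maruyama rollout of dq = f(q) dt + g(q) dW.

    q: (n_entities, d) entity queries from the last observed frame.
    Returns a (n_steps + 1, n_entities, d) trajectory of forecasts.
    """
    traj = [q]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), q.shape)  # Brownian increment
        q = q + mlp(drift, q) * dt + mlp(diffusion, q) * dw
        traj.append(q)
    return np.stack(traj)

rng = np.random.default_rng(0)
d, n_entities = 16, 3
drift = init_mlp(d, 32, d, rng)
diffusion = init_mlp(d, 32, d, rng)
q0 = rng.normal(size=(n_entities, d))
traj = forecast_queries(q0, drift, diffusion, n_steps=5, dt=0.1, rng=rng)
# traj[k] holds the forecast entity queries after k solver steps
```

In the full method these forecast queries would then be decoded into entity labels, boxes, and relationships, and refined against observed frames by the cross-attention aggregation module; this sketch covers only the stochastic rollout step.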

@article{yang2025_2506.01487,
  title={FDSG: Forecasting Dynamic Scene Graphs},
  author={Yi Yang and Yuren Cong and Hao Cheng and Bodo Rosenhahn and Michael Ying Yang},
  journal={arXiv preprint arXiv:2506.01487},
  year={2025}
}