
Geometric Visual Fusion Graph Neural Networks for Multi-Person Human-Object Interaction Recognition in Videos

Main: 30 pages, 13 figures, 11 tables; Bibliography: 10 pages
Abstract

Human-Object Interaction (HOI) recognition in videos requires understanding both visual patterns and geometric relationships as they evolve over time. Visual and geometric features offer complementary strengths: visual features capture appearance context, while geometric features provide structural patterns. Effectively fusing these multimodal features without compromising their distinct characteristics remains challenging. We observe that establishing robust, entity-specific representations before modeling interactions helps preserve the strengths of each modality, and we therefore hypothesize that a bottom-up approach is crucial for effective multimodal fusion. Following this insight, we propose the Geometric Visual Fusion Graph Neural Network (GeoVis-GNN), which uses dual-attention feature fusion combined with interdependent entity graph learning to progressively build from entity-specific representations toward high-level interaction understanding. To advance HOI recognition toward real-world scenarios, we introduce the Concurrent Partial Interaction Dataset (MPHOI-120), which captures dynamic multi-person interactions involving concurrent actions and partial engagement, addressing challenges such as complex human-object dynamics and mutual occlusions. Extensive experiments demonstrate the effectiveness of our method across diverse HOI scenarios, including two-person interactions, single-person activities, bimanual manipulations, and complex concurrent partial interactions, where it achieves state-of-the-art performance.
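
The abstract names dual-attention feature fusion and interdependent entity graph learning without spelling out an implementation. As a rough illustration only, the sketch below shows one plausible reading in PyTorch: per-entity visual and geometric features cross-attend to each other before a single round of message passing over a fully connected entity graph. All module names, dimensions, and layer choices are assumptions made for this sketch and are not taken from the paper.

import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    """Cross-attend visual and geometric entity features, then fuse them.

    Hypothetical sketch; dimensions and layer choices are assumptions,
    not the authors' implementation.
    """
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.vis_to_geo = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.geo_to_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, vis, geo):
        # vis, geo: (batch, num_entities, dim) per-frame entity features
        v_att, _ = self.vis_to_geo(vis, geo, geo)  # visual queries attend to geometry
        g_att, _ = self.geo_to_vis(geo, vis, vis)  # geometric queries attend to appearance
        return self.fuse(torch.cat([v_att, g_att], dim=-1))  # entity-specific fused features

class EntityGraphLayer(nn.Module):
    """One round of message passing over a fully connected entity graph."""
    def __init__(self, dim=256):
        super().__init__()
        self.edge = nn.Linear(2 * dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h):
        # h: (batch, num_entities, dim) fused entity features
        b, n, d = h.shape
        src = h.unsqueeze(2).expand(b, n, n, d)  # sender features
        dst = h.unsqueeze(1).expand(b, n, n, d)  # receiver features
        msgs = torch.relu(self.edge(torch.cat([src, dst], dim=-1))).mean(dim=1)
        return self.update(msgs.reshape(b * n, d), h.reshape(b * n, d)).reshape(b, n, d)

if __name__ == "__main__":
    vis = torch.randn(2, 5, 256)  # e.g. 2 clips, 5 entities (humans + objects)
    geo = torch.randn(2, 5, 256)
    fused = DualAttentionFusion()(vis, geo)
    out = EntityGraphLayer()(fused)
    print(out.shape)  # torch.Size([2, 5, 256])

In this reading, fusion happens per entity before any interaction reasoning, which matches the abstract's bottom-up premise; the graph layer then models dependencies between the already-fused entity representations.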

@article{qiao2025_2506.03440,
  title={Geometric Visual Fusion Graph Neural Networks for Multi-Person Human-Object Interaction Recognition in Videos},
  author={Tanqiu Qiao and Ruochen Li and Frederick W. B. Li and Yoshiki Kubotani and Shigeo Morishima and Hubert P. H. Shum},
  journal={arXiv preprint arXiv:2506.03440},
  year={2025}
}