Visual-Semantic Graph Attention Networks for Human-Object Interaction Detection
In scene understanding, machines benefit not only from detecting individual scene instances but also from learning their possible interactions. Human-Object Interaction (HOI) detection infers the action predicate of a <subject, predicate, object> triplet. Contextual information has been found critical for inferring interactions. However, most works use only local features from a single subject-object pair for inference. Few works have studied the disambiguating contribution of subsidiary relations made available via graph networks, or the impact attention mechanisms have on inference. Similarly, few have learned to effectively leverage visual cues together with the intrinsic semantic regularities contained in HOIs. We contribute a dual-graph attention network that, through attention mechanisms, dynamically aggregates contextual visual, spatial, and semantic information from primary subject-object relations as well as subsidiary relations, yielding strong disambiguating power. The network learns to use both primary and subsidiary relations to improve inference in challenging settings: encouraging the right interpretations and discouraging incorrect ones. We call our model Visual-Semantic Graph Attention Networks (VS-GATs). We surpass state-of-the-art HOI detection mAP on the challenging HICO-DET dataset, including in long-tail cases that are harder to interpret. Code, video, and supplementary information are available at http://www.juanrojas.net/VSGAT.
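The abstract does not include implementation details, so the following is only a minimal, generic sketch of the graph-attention aggregation idea the model builds on: each instance node attends over its relations so that informative subsidiary relations contribute more than uninformative ones. This is a single-head GAT-style layer in PyTorch; all names (`GraphAttentionLayer`, `proj`, `attn`) are hypothetical and this is not the authors' VS-GAT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Minimal single-head graph attention layer (GAT-style sketch).

    Each node aggregates its neighbors' features, weighted by learned
    attention scores, so some relations can contribute more to inference
    than others.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # Attention score computed from a concatenated (source, target) pair.
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency matrix.
        h = self.proj(x)                                     # (N, out_dim)
        n = h.size(0)
        # Pairwise concatenations h_i || h_j for all node pairs (i, j).
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)       # (N, N, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (N, N)
        # Mask non-edges so the softmax normalizes over neighbors only.
        scores = scores.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(scores, dim=-1)                # attention weights
        return F.elu(alpha @ h)                              # aggregated features

# Toy usage: 3 instance nodes (e.g., human, object, subsidiary object),
# fully connected without self-loops.
if __name__ == "__main__":
    x = torch.randn(3, 16)
    adj = torch.ones(3, 3) - torch.eye(3)
    layer = GraphAttentionLayer(16, 32)
    print(layer(x, adj).shape)  # torch.Size([3, 32])
```

In a dual-graph setting as described above, one such attention-weighted aggregation would run over visual/spatial node features and another over semantic (word-embedding) features, with the results fused before predicate classification; the exact fusion used by VS-GATs is specified in the paper, not here.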