Just Dance with π! A Poly-modal Inductor for Weakly-supervised Video Anomaly Detection

Abstract

Weakly-supervised methods for video anomaly detection (VAD) are conventionally based solely on RGB spatio-temporal features, which limits their reliability in real-world scenarios. This is because RGB features alone are not distinctive enough to set apart categories such as shoplifting from visually similar events. Therefore, towards robust VAD in complex real-world scenarios, it is essential to augment RGB spatio-temporal features with additional modalities. Motivated by this, we introduce the Poly-modal Induced framework for VAD: "PI-VAD", a novel approach that augments RGB representations with five additional modalities. Specifically, the modalities provide sensitivity to fine-grained motion (Pose), three-dimensional scene and entity representation (Depth), surrounding objects (Panoptic masks), global motion (Optical flow), and language cues (VLM). Each modality represents an axis of a polygon, streamlined to add salient cues to RGB. PI-VAD includes two plug-in modules, namely the Pseudo-modality Generation module and the Cross-modal Induction module, which generate modality-specific prototypical representations and thereby induce multi-modal information into RGB cues. These modules operate by performing anomaly-aware auxiliary tasks and require the five modality backbones only during training. Notably, PI-VAD achieves state-of-the-art accuracy on three prominent VAD datasets encompassing real-world scenarios, without incurring the computational overhead of five modality backbones at inference.

@article{majhi2025_2505.13123,
  title={Just Dance with $\pi$! A Poly-modal Inductor for Weakly-supervised Video Anomaly Detection},
  author={Snehashis Majhi and Giacomo D'Amicantonio and Antitza Dantcheva and Quan Kong and Lorenzo Garattoni and Gianpiero Francesca and Egor Bondarev and Francois Bremond},
  journal={arXiv preprint arXiv:2505.13123},
  year={2025}
}