Learning to track for spatio-temporal action localization

We propose an effective approach for action localization, in both the spatial and temporal domains, in realistic videos. The approach starts by detecting proposals at the frame level and proceeds to score them using a combination of state-of-the-art static and motion features extracted from CNNs. We then track a selection of proposals throughout the video, using a tracking-by-detection approach that leverages a combination of instance-level and class-specific learned detectors. The tracks are scored using a spatio-temporal motion histogram (STMH), a novel track-level descriptor, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach. We present experimental results on the UCF-Sports and J-HMDB action localization datasets, where our approach outperforms the state of the art by margins of 15% and 7% in mAP, respectively. Furthermore, we present the first experimental results on the challenging UCF-101 localization dataset with 24 classes, where we also obtain promising performance.
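The final step, sliding-window temporal localization, can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes per-frame track scores are available and uses illustrative window lengths, stride, and a mean-score criterion.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# slide windows of several lengths over per-frame track scores and
# keep the highest-scoring temporal span.
def localize(frame_scores, window_lengths=(10, 20, 40), stride=5):
    """Return (start, end) frame indices of the best-scoring window."""
    best_score, best_span = float("-inf"), None
    for length in window_lengths:
        for start in range(0, max(1, len(frame_scores) - length + 1), stride):
            window = frame_scores[start:start + length]
            score = sum(window) / len(window)  # mean detection score
            if score > best_score:
                best_score, best_span = score, (start, start + length)
    return best_span
```

In practice, the window score would come from the track-level STMH and CNN features described above; the mean of per-frame scores here simply stands in for that combined score.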