Recurrent Mixture Density Network for Spatiotemporal Visual Attention

The high-dimensional and redundant nature of video has pushed researchers to design attentional models that can dynamically focus computation on the spatiotemporal volumes that are most relevant. Specifically, these models have been used to eliminate or down-weight background pixels that are unimportant for the task at hand. To address this problem, we propose an attentional model that learns where to look in a video directly from human fixation data. The proposed model leverages deep 3D convolutional features to represent clip segments in videos. This clip-level representation is aggregated over time by a long short-term memory network that feeds into a mixture density network modeling the likely positions of fixations in each frame. The resulting model is trained end-to-end using backpropagation. Our experiments show state-of-the-art performance on saliency prediction for videos. Experiments on Hollywood2 and UCF101 also show that the predicted saliency can be used to improve classification accuracy on action recognition tasks.
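As an illustration of the pipeline the abstract describes (3D-conv clip features, an LSTM over time, and a mixture density network over 2D fixation positions), here is a minimal PyTorch sketch. The feature dimension, hidden size, number of mixture components, and diagonal-covariance parameterization are assumptions for illustration; they are not taken from the paper.

```python
# Sketch (not the authors' code): clip-level 3D-conv features -> LSTM -> MDN
# producing a Gaussian mixture over 2D fixation positions per clip segment.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentMDN(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, n_components=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Per component: 1 weight logit, 2D mean, 2D log-std (assumed diagonal covariance).
        self.mdn_head = nn.Linear(hidden_dim, n_components * 5)
        self.n_components = n_components

    def forward(self, clip_feats):
        # clip_feats: (batch, time, feat_dim) pooled 3D-conv features per clip segment
        h, _ = self.lstm(clip_feats)                    # (batch, time, hidden_dim)
        params = self.mdn_head(h)                       # (batch, time, K*5)
        K = self.n_components
        logit_pi, mu, log_sigma = params.split([K, 2 * K, 2 * K], dim=-1)
        log_pi = F.log_softmax(logit_pi, dim=-1)        # log mixture weights
        mu = mu.reshape(*mu.shape[:-1], K, 2)           # component means (x, y)
        sigma = log_sigma.reshape(*log_sigma.shape[:-1], K, 2).exp()
        return log_pi, mu, sigma

    @staticmethod
    def nll(log_pi, mu, sigma, fixations):
        # fixations: (batch, time, 2) ground-truth fixation coordinates
        x = fixations.unsqueeze(-2)                     # (batch, time, 1, 2)
        comp = torch.distributions.Normal(mu, sigma)
        log_prob = comp.log_prob(x).sum(-1)             # (batch, time, K)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()


# Usage sketch: random tensors standing in for pooled 3D-conv clip descriptors
# and normalized (x, y) human fixations.
if __name__ == "__main__":
    model = RecurrentMDN()
    feats = torch.randn(4, 16, 512)        # 4 videos, 16 clip segments each
    fixations = torch.rand(4, 16, 2)       # normalized fixation coordinates
    log_pi, mu, sigma = model(feats)
    loss = RecurrentMDN.nll(log_pi, mu, sigma, fixations)
    loss.backward()                        # trained end-to-end via backpropagation
    print(loss.item())
```

Minimizing the mixture negative log-likelihood in this way is the standard mixture-density-network training objective; the paper's exact loss and backbone may differ.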