In recent years, visual saliency estimation in images has attracted much attention in the computer vision community. However, predicting saliency in videos has received relatively little attention. Inspired by the recent success of static saliency models based on deep convolutional neural networks, in this work we study two different two-stream convolutional networks for dynamic saliency prediction. To improve the generalization capability of our models, we also introduce a novel, empirically grounded data augmentation technique for this task. We test our models on the DIEM dataset and report superior results compared to the existing models. Moreover, we perform transfer learning experiments on SALICON, a recently proposed static saliency dataset, by finetuning our models on optical flows estimated from static images. Our experiments show that taking motion into account in this way can be helpful for static saliency estimation.
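The abstract does not detail the network layout, but a minimal sketch of the general two-stream idea it refers to is shown below: a spatial stream over an RGB frame and a temporal stream over an optical-flow field, fused into a single saliency map. The layer sizes, the late-fusion scheme, and the class name `TwoStreamSaliency` are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical two-stream saliency sketch (PyTorch); not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Conv + ReLU + downsampling, used by both streams.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class TwoStreamSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        # Spatial stream: 3-channel RGB frame.
        self.spatial = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Temporal stream: 2-channel optical flow (dx, dy).
        self.temporal = nn.Sequential(conv_block(2, 32), conv_block(32, 64))
        # Late fusion of the concatenated features into a 1-channel saliency map.
        self.fuse = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, rgb, flow):
        feats = torch.cat([self.spatial(rgb), self.temporal(flow)], dim=1)
        logits = self.fuse(feats)
        # Upsample back to the input resolution and squash to [0, 1].
        logits = F.interpolate(
            logits, size=rgb.shape[-2:], mode="bilinear", align_corners=False
        )
        return torch.sigmoid(logits)

# Example usage: one RGB frame and its estimated optical flow.
model = TwoStreamSaliency()
rgb = torch.randn(1, 3, 224, 224)
flow = torch.randn(1, 2, 224, 224)
saliency = model(rgb, flow)  # shape: (1, 1, 224, 224)
```

Under this reading, the transfer learning experiment on SALICON would amount to feeding the temporal stream optical flow estimated from static images and finetuning the fused model on static saliency annotations.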