Video Summarization by Learning from Unpaired Data

30 May 2018
Mrigank Rochan
Yang Wang
arXiv:1805.12174
Abstract

We consider the problem of video summarization. Given an input raw video, the goal is to select a small subset of key frames from the input video to create a shorter summary video that best describes the content of the original video. Most of the current state-of-the-art video summarization approaches use supervised learning and require labeled training data. Each training instance consists of a raw input video and its ground truth summary video curated by human annotators. However, it is very expensive and difficult to create such labeled training examples. To address this limitation, we propose a novel formulation to learn video summarization from unpaired data. We present an approach that learns to generate optimal video summaries using a set of raw videos ($V$) and a set of summary videos ($S$), where there exists no correspondence between $V$ and $S$. We argue that this type of data is much easier to collect. Our model aims to learn a mapping function $F: V \rightarrow S$ such that the distribution of resultant summary videos from $F(V)$ is similar to the distribution of $S$ with the help of an adversarial objective. In addition, we enforce a diversity constraint on $F(V)$ to ensure that the generated video summaries are visually diverse. Experimental results on two benchmark datasets indicate that our proposed approach significantly outperforms other alternative methods.
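
To make the training setup concrete, here is a minimal sketch of the kind of unpaired adversarial objective the abstract describes: a summarizer scores frames of a raw video, a discriminator compares the resulting soft summary against (unrelated) real summary videos, and a diversity term penalizes redundant selections. This is not the authors' implementation; the module names (Summarizer, Discriminator, diversity_loss), network shapes, soft frame-weighting, and the cosine-based diversity penalty are all illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's actual architecture or losses.
import torch
import torch.nn as nn
import torch.nn.functional as F_  # aliased to avoid clashing with the mapping F

class Summarizer(nn.Module):
    """Maps per-frame features of a raw video to selection scores (the mapping F)."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frames):                            # frames: (1, T, feat_dim)
        h, _ = self.lstm(frames)
        return torch.sigmoid(self.head(h)).squeeze(-1)    # (1, T) frame scores

class Discriminator(nn.Module):
    """Judges whether a frame sequence looks like a real summary video."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                            # frames: (1, T, feat_dim)
        _, (h, _) = self.lstm(frames)
        return self.head(h[-1])                           # real/fake logit

def diversity_loss(frames, scores, weight=1e-4):
    """Penalize pairwise similarity among strongly selected frames (assumed form)."""
    weighted = frames.squeeze(0) * scores.squeeze(0).unsqueeze(-1)        # (T, D)
    sim = F_.cosine_similarity(weighted.unsqueeze(1), weighted.unsqueeze(0), dim=-1)
    off_diag = sim - torch.diag(torch.diag(sim))          # zero out self-similarity
    return weight * off_diag.abs().mean()

# One adversarial step: F tries to make its soft summary of a raw video from V
# indistinguishable from an unpaired real summary from S, plus the diversity term.
summarizer, disc = Summarizer(), Discriminator()
bce = nn.BCEWithLogitsLoss()
raw = torch.randn(1, 120, 1024)           # frame features of a raw video from V
real_summary = torch.randn(1, 15, 1024)   # frame features of an unrelated summary from S

scores = summarizer(raw)
fake_summary = raw * scores.unsqueeze(-1)                 # soft frame selection F(V)

d_loss = bce(disc(real_summary), torch.ones(1, 1)) + \
         bce(disc(fake_summary.detach()), torch.zeros(1, 1))
g_loss = bce(disc(fake_summary), torch.ones(1, 1)) + diversity_loss(raw, scores)
```

The soft frame-weighting here is a stand-in for discrete key-frame selection, which at inference time would be obtained by thresholding or ranking the scores; crucially, no raw video is ever paired with its own ground-truth summary, matching the unpaired formulation.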
