
Behavior Discovery and Alignment of Articulated Object Classes from Unstructured Video

Abstract

Internet videos provide a wealth of data that could be used to learn the appearance and expected behaviors of many object classes. However, most supervised methods cannot exploit this data directly, as they require large amounts of time-consuming manual annotation. As a step towards solving this problem, we propose an automatic system for organizing the content of a collection of videos of an articulated object class (e.g. tiger, horse). By exploiting the recurring motion patterns of the class across videos, our system: 1) identifies its characteristic behaviors; and 2) recovers pixel-to-pixel alignments across different instances. The behavior discovery stage generates temporal video intervals, each automatically trimmed to one instance of a discovered behavior and clustered by behavior type. It relies on our novel representation of articulated motion based on the displacement of ordered pairs of trajectories (PoTs). The alignment stage aligns hundreds of instances of the class with high accuracy despite considerable appearance variation (e.g. between an adult tiger and a cub). It uses a flexible Thin Plate Spline deformation model that can vary over time. We carefully evaluate each step of our system on a new, fully annotated dataset. On behavior discovery, we outperform the state-of-the-art Improved DTF descriptor; on spatial alignment, we outperform the popular SIFT Flow algorithm.
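The abstract does not spell out how a pair-of-trajectories (PoT) descriptor is computed. As a rough, hypothetical sketch only (the function name, the orientation-histogram construction, and the normalization below are illustrative assumptions, not the paper's actual definition), one way to summarize the relative motion of an ordered pair of tracked points over a video interval could look like this:

```python
import numpy as np

def pot_descriptor(traj_a, traj_b, num_bins=8):
    """Illustrative PoT-style descriptor for an ordered pair of trajectories.

    traj_a, traj_b: (T, 2) arrays of (x, y) point positions tracked over the
    same T frames. The descriptor histograms the orientation of the change in
    the relative displacement between the two points over time, capturing how
    one body part moves with respect to another.
    """
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    assert traj_a.shape == traj_b.shape and traj_a.shape[1] == 2

    # Relative vector from point a to point b at each frame.
    rel = traj_b - traj_a                              # (T, 2)
    # Frame-to-frame change of that relative vector (articulated motion).
    d_rel = np.diff(rel, axis=0)                       # (T-1, 2)

    # Histogram the orientation of the change, weighted by its magnitude.
    angles = np.arctan2(d_rel[:, 1], d_rel[:, 0])      # in [-pi, pi)
    mags = np.linalg.norm(d_rel, axis=1)
    hist, _ = np.histogram(angles, bins=num_bins,
                           range=(-np.pi, np.pi), weights=mags)

    # L1-normalize so the descriptor is invariant to overall motion energy.
    total = hist.sum()
    return hist / total if total > 0 else hist

if __name__ == "__main__":
    # Toy example: point b swings around a static point a, as a limb would.
    t = np.linspace(0, np.pi, 30)
    a = np.zeros((30, 2))
    b = np.stack([np.cos(t), np.sin(t)], axis=1)
    print(pot_descriptor(a, b))
```

Descriptors of this kind can then be pooled over many trajectory pairs and clustered across videos to group intervals by behavior type, which is the role the PoT representation plays in the behavior discovery stage described above.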
