
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization

15 pages main text, 3 pages bibliography, 9 figures, 2 tables
Abstract

Temporal Action Localization (TAL) aims to detect the start and end timestamps of actions in a video. However, training TAL models requires a substantial amount of manually annotated data. Data programming is an efficient method to create training labels with a series of human-defined labeling functions, but applying it to TAL is difficult because complex actions must be defined in the context of temporal video frames. In this paper, we propose ProTAL, a drag-and-link video programming framework for TAL. ProTAL enables users to define key events by dragging nodes representing body parts and objects and linking them to constrain their relations (direction, distance, etc.). These definitions are used to generate action labels for large-scale unlabelled videos. A semi-supervised method is then employed to train TAL models with such labels. We demonstrate the effectiveness of ProTAL through a usage scenario and a user study, providing insights into designing video programming frameworks.
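To illustrate the idea, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a drag-and-link key-event definition could translate into a data-programming-style labeling rule. All names here (Node, Link, key_event_matches) are assumptions introduced only to show how relations such as direction and distance between body parts and objects might be checked per frame.

```python
# Hypothetical sketch of a drag-and-link key-event rule applied to one frame.
# Nodes stand for detected body parts / objects; links encode relational constraints.
from dataclasses import dataclass
import math


@dataclass
class Node:
    name: str   # e.g. "right_hand", "cup"
    x: float    # keypoint / object-center coordinates in the frame
    y: float


@dataclass
class Link:
    source: str                   # names of the two nodes the link connects
    target: str
    max_distance: float           # distance constraint (pixels)
    direction: str | None = None  # optional direction constraint: "above" or "below"


def _satisfies(link: Link, src: Node, dst: Node) -> bool:
    """Check one link's distance and direction constraints between two nodes."""
    dist = math.hypot(src.x - dst.x, src.y - dst.y)
    if dist > link.max_distance:
        return False
    if link.direction == "above" and not src.y < dst.y:   # image coords: smaller y is higher
        return False
    if link.direction == "below" and not src.y > dst.y:
        return False
    return True


def key_event_matches(nodes: dict[str, Node], links: list[Link]) -> bool:
    """Return True if a frame's detected nodes satisfy every link constraint."""
    return all(
        link.source in nodes and link.target in nodes
        and _satisfies(link, nodes[link.source], nodes[link.target])
        for link in links
    )


# Example: a "drinking" key event where the right hand holds a cup near the mouth.
drinking_links = [
    Link("right_hand", "cup", max_distance=40.0),
    Link("cup", "mouth", max_distance=60.0, direction="below"),
]
frame = {
    "right_hand": Node("right_hand", 210.0, 300.0),
    "cup": Node("cup", 220.0, 310.0),
    "mouth": Node("mouth", 215.0, 260.0),
}
print(key_event_matches(frame, drinking_links))  # True: all constraints hold in this frame
```

In a ProTAL-style pipeline, frames where such a rule fires would mark candidate action segments, producing weak labels for large-scale unlabelled videos that a semi-supervised TAL model can then be trained on.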

@article{he2025_2505.17555,
  title={ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization},
  author={Yuchen He and Jianbing Lv and Liqi Cheng and Lingyu Meng and Dazhen Deng and Yingcai Wu},
  journal={arXiv preprint arXiv:2505.17555},
  year={2025}
}