ResearchTrend.AI
OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection

27 February 2025
Shuming Liu
Chen Zhao
Fatimah Zohra
Mattia Soldan
Alejandro Pardo
Mengmeng Xu
Lama Alssum
Merey Ramazanova
Juan Carlos León Alcázar
Anthony Cioppa
Silvio Giancola
Carlos Hinojosa
Bernard Ghanem
Abstract

Temporal action detection (TAD) is a fundamental video understanding task that aims to identify human actions and localize their temporal boundaries in videos. Although this field has achieved remarkable progress in recent years, further progress and real-world applications are impeded by the absence of a standardized framework. Currently, different methods are compared under different implementation settings, evaluation protocols, etc., making it difficult to assess the real effectiveness of a specific technique. To address this issue, we propose OpenTAD, a unified TAD framework consolidating 16 different TAD methods and 9 standard datasets into a modular codebase. In OpenTAD, minimal effort is required to replace one module with a different design, train a feature-based TAD model in end-to-end mode, or switch between the two. OpenTAD also facilitates straightforward benchmarking across various datasets and enables fair and in-depth comparisons among different methods. With OpenTAD, we comprehensively study how innovations in different network components affect detection performance and identify the most effective design choices through extensive experiments. This study has led to a new state-of-the-art TAD method built upon existing techniques for each component. We have made our code and models available at this https URL.
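The "minimal effort to replace one module" claim typically rests on a registry-plus-config pattern, where each component is registered under a string key and instantiated from a plain config dict. The sketch below illustrates that pattern in isolation; the class names, registry, and `build` helper are illustrative assumptions, not OpenTAD's actual API.

```python
# Hypothetical sketch of a config-driven modular design (names are
# illustrative, not OpenTAD's real interface): components register
# themselves under a string key, and a model part is built from a
# dict config, so swapping a module is a one-entry config change.

REGISTRY = {}

def register(name):
    """Class decorator recording a component under a string key."""
    def deco(cls):
        REGISTRY[name] = cls
        return cls
    return deco

@register("conv_head")
class ConvHead:
    def __init__(self, channels):
        self.channels = channels

@register("transformer_head")
class TransformerHead:
    def __init__(self, channels):
        self.channels = channels

def build(cfg):
    """Instantiate a registered component from {'type': ..., **kwargs}."""
    cfg = dict(cfg)  # copy so the caller's config is untouched
    cls = REGISTRY[cfg.pop("type")]
    return cls(**cfg)

# Swapping the detection head only changes the "type" entry:
head_a = build({"type": "conv_head", "channels": 256})
head_b = build({"type": "transformer_head", "channels": 256})
```

Under this pattern, benchmarking two designs fairly means training twice with configs that differ in a single entry, which is the kind of controlled comparison the abstract describes.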

@article{liu2025_2502.20361,
  title={OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection},
  author={Shuming Liu and Chen Zhao and Fatimah Zohra and Mattia Soldan and Alejandro Pardo and Mengmeng Xu and Lama Alssum and Merey Ramazanova and Juan León Alcázar and Anthony Cioppa and Silvio Giancola and Carlos Hinojosa and Bernard Ghanem},
  journal={arXiv preprint arXiv:2502.20361},
  year={2025}
}