
Zero-Shot Temporal Interaction Localization for Egocentric Videos

4 June 2025
Erhang Zhang, Junyi Ma, Yin-Dong Zheng, Yixuan Zhou, Hesheng Wang
Main: 7 pages · 7 figures · 5 tables · Bibliography: 1 page
Abstract

Locating human-object interaction (HOI) actions in videos serves as the foundation for multiple downstream tasks, such as human behavior analysis and human-robot skill transfer. Current temporal action localization methods typically rely on annotated action and object categories of interactions for optimization, which leads to domain bias and low deployment efficiency. Although some recent works have achieved zero-shot temporal action localization (ZS-TAL) with large vision-language models (VLMs), their coarse-grained estimations and open-loop pipelines hinder further performance improvements for temporal interaction localization (TIL). To address these issues, we propose a novel zero-shot TIL approach dubbed EgoLoc to locate the timings of grasp actions for human-object interaction in egocentric videos. EgoLoc introduces a self-adaptive sampling strategy to generate reasonable visual prompts for VLM reasoning. By absorbing both 2D and 3D observations, it directly samples high-quality initial guesses around the possible contact/separation timestamps of HOI according to 3D hand velocities, leading to high inference accuracy and efficiency. In addition, EgoLoc generates closed-loop feedback from visual and dynamic cues to further refine the localization results. Comprehensive experiments on the publicly available dataset and our newly proposed benchmark demonstrate that EgoLoc achieves better temporal interaction localization for egocentric videos compared to state-of-the-art baselines. We will release our code and relevant data as open-source at this https URL.
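
The velocity-guided sampling idea in the abstract can be made concrete with a short sketch. The Python below is a hypothetical illustration only, not the authors' released code: it assumes a (T, 3) array of per-frame 3D hand positions (which EgoLoc would derive from fused 2D/3D observations), treats local minima of hand speed as plausible contact/separation moments, and draws a few frames around each minimum to serve as visual prompts for a VLM. The function name, window size, and minima heuristic are all assumptions.

```python
import numpy as np

def sample_candidate_frames(hand_positions, fps, num_events=5, window_s=0.5):
    """Velocity-guided candidate sampling (hypothetical sketch, not EgoLoc's code).

    hand_positions: (T, 3) array of per-frame 3D hand centroids (assumed input).
    Returns a list of frame-index groups, one group per candidate
    contact/separation event, to be rendered as visual prompts for the VLM.
    """
    # Finite-difference 3D hand speed between consecutive frames.
    velocity = np.diff(hand_positions, axis=0) * fps   # (T-1, 3)
    speed = np.linalg.norm(velocity, axis=1)           # (T-1,)

    # Hands tend to decelerate when grasping or releasing an object, so
    # local speed minima are plausible initial guesses for the
    # contact/separation timestamps of the interaction.
    minima = [t for t in range(1, len(speed) - 1)
              if speed[t - 1] >= speed[t] <= speed[t + 1]]
    minima.sort(key=lambda t: speed[t])                # slowest first

    # Around each minimum, pick a few frames inside a small temporal
    # window; these frames become the VLM's visual prompt for that event.
    half = max(1, int(window_s * fps / 2))
    prompts = []
    for t in minima[:num_events]:
        lo, hi = max(0, t - half), min(len(speed) - 1, t + half)
        prompts.append(np.linspace(lo, hi, num=3, dtype=int).tolist())
    return prompts
```

The closed-loop refinement described in the abstract would then re-score these candidates with visual and dynamic feedback and resample around the most promising one; that step depends on the VLM interface and is omitted here.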

@article{zhang2025_2506.03662,
  title={Zero-Shot Temporal Interaction Localization for Egocentric Videos},
  author={Erhang Zhang and Junyi Ma and Yin-Dong Zheng and Yixuan Zhou and Hesheng Wang},
  journal={arXiv preprint arXiv:2506.03662},
  year={2025}
}