
Ego-VPA: Egocentric Video Understanding with Parameter-efficient Adaptation

Abstract

Video understanding typically requires fine-tuning a large backbone when adapting to new domains. In this paper, we leverage egocentric video foundation models (Ego-VFMs), built on video-language pre-training, and propose Ego-VPA, a parameter-efficient adaptation method for egocentric video tasks. Ego-VPA approximates each video frame/text feature with a local sparse combination of basis prompts, and the selected basis prompts are used to synthesize video/text prompts. Because the basis prompts are shared across frames and modalities, the method models context fusion and cross-modal transfer efficiently. Experiments show that Ego-VPA excels at lightweight adaptation (with only 0.84% learnable parameters), improving substantially over baselines and reaching the performance of full fine-tuning.
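
The prompt-synthesis idea above can be pictured with a short sketch. The PyTorch snippet below is an assumed, illustrative rendering of "select a few basis prompts per frame/text feature and mix them into a synthesized prompt"; the class name BasisPromptSynthesizer and the hyperparameters num_basis, dim, and top_k are placeholders chosen for the example, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisPromptSynthesizer(nn.Module):
    """Hypothetical module: one basis-prompt bank shared across frames and modalities."""

    def __init__(self, num_basis: int = 16, dim: int = 512, top_k: int = 4):
        super().__init__()
        # Learnable basis prompts; this small bank is the only trainable parameter set here.
        self.basis = nn.Parameter(torch.randn(num_basis, dim) * 0.02)
        self.top_k = top_k

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_tokens, dim) video-frame or text features.
        q = F.normalize(feats, dim=-1)
        b = F.normalize(self.basis, dim=-1)
        sim = q @ b.t()                               # (batch, tokens, num_basis)
        scores, idx = sim.topk(self.top_k, dim=-1)    # local sparse selection per token
        weights = scores.softmax(dim=-1)              # mixing coefficients
        selected = self.basis[idx]                    # (batch, tokens, top_k, dim)
        # Synthesized prompts: sparse, weighted combination of the selected basis prompts.
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)

synth = BasisPromptSynthesizer()
video_prompts = synth(torch.randn(2, 8, 512))   # e.g., 8 frame features per clip
text_prompts = synth(torch.randn(2, 12, 512))   # e.g., 12 text-token features, same module

Because the same basis is reused for every frame and for both modalities, the learnable footprint stays small, which is consistent with the abstract's figure of only 0.84% learnable parameters.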

@article{wu2025_2407.19520,
  title={Ego-VPA: Egocentric Video Understanding with Parameter-efficient Adaptation},
  author={Tz-Ying Wu and Kyle Min and Subarna Tripathi and Nuno Vasconcelos},
  journal={arXiv preprint arXiv:2407.19520},
  year={2025}
}