ResearchTrend.AI
Efficient Motion Prompt Learning for Robust Visual Tracking

22 May 2025
Jie Zhao
Xin Chen
Yongsheng Yuan
Michael Felsberg
Dong Wang
Huchuan Lu
Abstract

Due to the challenges of processing temporal information, most trackers depend solely on visual discriminability and overlook the unique temporal coherence of video data. In this paper, we propose a lightweight and plug-and-play motion prompt tracking method. It can be easily integrated into existing vision-based trackers to build a joint tracking framework leveraging both motion and vision cues, thereby achieving robust tracking through efficient prompt learning. A motion encoder with three different positional encodings is proposed to encode the long-term motion trajectory into the visual embedding space, while a fusion decoder and an adaptive weight mechanism are designed to dynamically fuse visual and motion features. We integrate our motion module into three different trackers with five models in total. Experiments on seven challenging tracking benchmarks demonstrate that the proposed motion module significantly improves the robustness of vision-based trackers, with minimal training costs and negligible speed sacrifice. Code is available at this https URL.
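The abstract describes encoding a motion trajectory with positional encodings and fusing it with visual features via an adaptive weight. As a rough illustration only (the paper's actual encoder, decoder, and weighting are not specified here), the sketch below uses a standard sinusoidal positional encoding and a sigmoid-gated weighted sum; all function names and the scalar gate `w` are hypothetical, not from the paper.

```python
import numpy as np

def sinusoidal_encoding(positions, dim):
    # Standard sinusoidal positional encoding over trajectory time steps.
    # Assumption: the paper proposes three encoding variants; this shows only
    # the common sinusoidal form as a stand-in.
    positions = np.asarray(positions, dtype=float)
    pe = np.zeros((len(positions), dim))
    div = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    pe[:, 0::2] = np.sin(positions[:, None] * div)
    pe[:, 1::2] = np.cos(positions[:, None] * div)
    return pe

def adaptive_fuse(visual_feat, motion_feat, w):
    # Adaptive weighted fusion: a sigmoid maps the (hypothetical) learned
    # scalar w to alpha in (0, 1), blending visual and motion features.
    alpha = 1.0 / (1.0 + np.exp(-w))
    return alpha * visual_feat + (1.0 - alpha) * motion_feat
```

With `w = 0` the gate reduces to an even average of the two feature vectors; in the paper the weight is learned so the tracker can lean on motion cues when visual evidence is ambiguous.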

@article{zhao2025_2505.16321,
  title={Efficient Motion Prompt Learning for Robust Visual Tracking},
  author={Jie Zhao and Xin Chen and Yongsheng Yuan and Michael Felsberg and Dong Wang and Huchuan Lu},
  journal={arXiv preprint arXiv:2505.16321},
  year={2025}
}