LiteTracker: Leveraging Temporal Causality for Accurate Low-latency Tissue Tracking

14 April 2025
Mert Asim Karaoglu
Wenbo Ji
Ahmed Abbas
Nassir Navab
Benjamin Busam
Alexander Ladikos
Abstract

Tissue tracking plays a critical role in various surgical navigation and extended reality (XR) applications. While current methods trained on large synthetic datasets achieve high tracking accuracy and generalize well to endoscopic scenes, their runtime performance fails to meet the low-latency requirements necessary for real-time surgical applications. To address this limitation, we propose LiteTracker, a low-latency method for tissue tracking in endoscopic video streams. LiteTracker builds on a state-of-the-art long-term point tracking method and introduces a set of training-free runtime optimizations. These optimizations enable online, frame-by-frame tracking by leveraging a temporal memory buffer for efficient feature reuse and utilizing prior motion for accurate track initialization. LiteTracker demonstrates significant runtime improvements, running around 7x faster than its predecessor and 2x faster than the state of the art. Beyond its primary focus on efficiency, LiteTracker delivers high-accuracy tracking and occlusion prediction, performing competitively on both the STIR and SuPer datasets. We believe LiteTracker is an important step toward low-latency tissue tracking for real-time surgical applications in the operating room.
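The two runtime optimizations named in the abstract — a temporal memory buffer for feature reuse and a motion prior for track initialization — can be sketched in a few lines. This is a minimal illustration of the general idea, not the paper's actual implementation: the class, method names, and the constant-velocity prior are assumptions, and the refinement step is a stand-in for the learned update.

```python
from collections import deque

import numpy as np


class OnlineTracker:
    """Sketch of two ideas from the abstract: (1) a fixed-size temporal
    memory buffer caching per-frame features so each frame is encoded
    once and reused, and (2) a constant-velocity motion prior used to
    initialize point positions before refinement. All names are
    hypothetical; the real method uses a learned long-term point tracker.
    """

    def __init__(self, buffer_size: int = 4):
        self.memory = deque(maxlen=buffer_size)  # cached feature maps
        self.prev_pts = None       # tracks at frame t-1, shape (N, 2)
        self.prev_prev_pts = None  # tracks at frame t-2, shape (N, 2)

    def init_tracks(self, pts) -> None:
        self.prev_pts = np.asarray(pts, dtype=np.float32)
        self.prev_prev_pts = None

    def _extract_features(self, frame: np.ndarray) -> np.ndarray:
        # Placeholder for a learned encoder; here just a float copy.
        return frame.astype(np.float32)

    def _refine(self, guess: np.ndarray) -> np.ndarray:
        # A real tracker would correlate features from self.memory here;
        # this sketch returns the motion-prior guess unchanged.
        return guess

    def step(self, frame: np.ndarray) -> np.ndarray:
        # Encode only the new frame; older feature maps are reused from
        # the memory buffer instead of being recomputed.
        self.memory.append(self._extract_features(frame))

        if self.prev_pts is None:
            raise RuntimeError("call init_tracks() first")

        # Constant-velocity prior: extrapolate from the last two frames.
        if self.prev_prev_pts is not None:
            guess = self.prev_pts + (self.prev_pts - self.prev_prev_pts)
        else:
            guess = self.prev_pts.copy()

        refined = self._refine(guess)
        self.prev_prev_pts, self.prev_pts = self.prev_pts, refined
        return refined
```

Because the buffer has a fixed `maxlen`, per-frame cost stays constant regardless of sequence length, which is what makes online, frame-by-frame operation feasible.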

View on arXiv
@article{karaoglu2025_2504.09904,
  title={LiteTracker: Leveraging Temporal Causality for Accurate Low-latency Tissue Tracking},
  author={Mert Asim Karaoglu and Wenbo Ji and Ahmed Abbas and Nassir Navab and Benjamin Busam and Alexander Ladikos},
  journal={arXiv preprint arXiv:2504.09904},
  year={2025}
}