Endo-FASt3r: Endoscopic Foundation model Adaptation for Structure from motion

10 March 2025
Mona Sheikh Zeinoddin
Mobarakol Islam
Zafer Tandogdu
Greg Shaw
Mathew J. Clarkson
E. Mazomenos
Danail Stoyanov
Abstract

Accurate depth and camera pose estimation is essential for achieving high-quality 3D visualisations in robotic-assisted surgery. Despite recent advancements in adapting foundation models to monocular depth estimation of endoscopic scenes via self-supervised learning (SSL), no prior work has explored their use for pose estimation. These methods rely on low-rank adaptation approaches, which constrain model updates to a low-rank space. We propose Endo-FASt3r, the first monocular SSL depth and pose estimation framework that uses foundation models for both tasks. We extend the Reloc3r relative pose estimation foundation model by designing Reloc3rX, introducing modifications necessary for convergence in SSL. We also present DoMoRA, a novel adaptation technique that enables higher-rank updates and faster convergence. Experiments on the SCARED dataset show that Endo-FASt3r achieves a substantial 10% improvement in pose estimation and a 2% improvement in depth estimation over prior work. Similar performance gains on the Hamlyn and StereoMIS datasets reinforce the generalisability of Endo-FASt3r across different datasets.
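The abstract contrasts DoMoRA with conventional low-rank adaptation, which constrains foundation-model weight updates to a low-rank subspace. The abstract does not describe DoMoRA's mechanics, so the sketch below only illustrates the standard low-rank (LoRA-style) adapter being contrasted against; the class name, rank, and scaling factor are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of conventional low-rank adaptation (LoRA-style), the
# baseline style of adapter the abstract contrasts DoMoRA against.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # foundation-model weights stay frozen
            p.requires_grad = False
        self.scale = alpha / rank
        # Low-rank factors: the update B @ A has rank at most `rank`.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init => no change at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(256, 256), rank=8)
    print(layer(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```

Because the update is the product of two thin matrices, only a small fraction of parameters are trained; the abstract's point is that this also caps the rank of the update, a limitation DoMoRA is designed to relax.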

@article{zeinoddin2025_2503.07204,
  title={Endo-FASt3r: Endoscopic Foundation model Adaptation for Structure from motion},
  author={Mona Sheikh Zeinoddin and Mobarakol Islam and Zafer Tandogdu and Greg Shaw and Mathew J. Clarkson and Evangelos Mazomenos and Danail Stoyanov},
  journal={arXiv preprint arXiv:2503.07204},
  year={2025}
}