UOD: Universal One-shot Detection of Anatomical Landmarks

13 June 2023
Heqin Zhu
Quan Quan
Qingsong Yao
Zaiyi Liu
S. Kevin Zhou
Abstract

One-shot medical landmark detection has gained much attention and achieved great success owing to its label-efficient training process. However, existing one-shot learning methods are highly specialized to a single domain and suffer heavily from domain preference when applied to multi-domain unlabeled data. Moreover, one-shot learning is not robust: its performance drops when a sub-optimal image is chosen for annotation. To tackle these issues, we develop a domain-adaptive one-shot landmark detection framework for multi-domain medical images, named Universal One-shot Detection (UOD). UOD consists of two stages, each with a corresponding universal model designed as a combination of domain-specific modules and domain-shared modules. In the first stage, a domain-adaptive convolutional model is trained in a self-supervised manner to generate pseudo landmark labels. In the second stage, we design a domain-adaptive transformer to eliminate domain preference and build global context for multi-domain data. Even though only one annotated sample from each domain is available for training, the domain-shared modules help UOD aggregate all one-shot samples to detect more robust and accurate landmarks. We evaluate the proposed UOD qualitatively and quantitatively on three widely used public X-ray datasets from different anatomical domains (i.e., head, hand, and chest) and obtain state-of-the-art performance in each domain. The code is available at this https URL.
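
To make the "domain-specific plus domain-shared modules" idea concrete, below is a minimal PyTorch sketch of a universal landmark head that shares one block across all domains while routing each image through a per-domain adapter. The module choices, channel sizes, and the number of landmarks are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: conv adapters, 64 channels, 19 landmarks) of a
# universal model mixing domain-shared and domain-specific modules.
import torch
import torch.nn as nn


class UniversalLandmarkHead(nn.Module):
    def __init__(self, num_domains=3, in_channels=64, num_landmarks=19):
        super().__init__()
        # Domain-shared block: its parameters are reused for every domain, so
        # the one-shot samples of all domains reinforce each other.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )
        # Domain-specific heads: one lightweight adapter per domain
        # (e.g., head / hand / chest X-rays) to absorb domain differences.
        self.specific = nn.ModuleList(
            nn.Conv2d(in_channels, num_landmarks, 1) for _ in range(num_domains)
        )

    def forward(self, feats, domain_id):
        # feats: (B, C, H, W) features from a shared encoder; domain_id selects
        # which domain-specific head produces the landmark heatmaps.
        shared_feats = self.shared(feats)
        return self.specific[domain_id](shared_feats)


if __name__ == "__main__":
    head = UniversalLandmarkHead()
    x = torch.randn(2, 64, 96, 96)      # dummy feature maps from one domain
    heatmaps = head(x, domain_id=1)     # route through that domain's adapter
    print(heatmaps.shape)               # torch.Size([2, 19, 96, 96])
```

In the paper's two-stage pipeline, a structure of this kind would appear twice: once in the self-supervised convolutional model that produces pseudo labels, and once in the domain-adaptive transformer that refines them with global context.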

@article{zhu2025_2306.07615,
  title={UOD: Universal One-shot Detection of Anatomical Landmarks},
  author={Heqin Zhu and Quan Quan and Qingsong Yao and Zaiyi Liu and S. Kevin Zhou},
  journal={arXiv preprint arXiv:2306.07615},
  year={2025}
}