Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation

14 March 2023
Tao Yang
Lisheng Wang
Abstract

The Koos grading scale is a classification system for vestibular schwannoma (VS) that characterizes the tumor and its effects on adjacent brain structures. The Koos grade captures many of the characteristics that inform treatment decisions and is often used to determine treatment plans. Although both contrast-enhanced T1 (ceT1) and high-resolution T2 (hrT2) scans can be used for Koos classification, hrT2 scanning is gaining interest because of its higher safety and cost-effectiveness. However, in the absence of annotations for hrT2 scans, deep learning methods inevitably suffer performance degradation due to unsupervised learning. If ceT1 scans and their annotations can be used for unsupervised learning on hrT2 scans, the performance of Koos classification with unlabeled hrT2 scans can be greatly improved. In this regard, we propose an unsupervised cross-modality domain adaptation method based on image translation: annotated ceT1 scans are transformed into the hrT2 modality, and their annotations are reused to achieve supervised learning in the hrT2 modality. The VS and seven adjacent brain structures relevant to Koos classification are then segmented in hrT2 scans. Finally, handcrafted features are extracted from the segmentation results, and the Koos grade is classified using a random forest classifier. The proposed method ranked first on the Koos classification task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge, with a Macro-Averaged Mean Absolute Error (MA-MAE) of 0.2148 on the validation set and 0.26 on the test set.
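The abstract describes a three-stage pipeline: translate annotated ceT1 scans into the hrT2 modality, segment the VS and seven adjacent structures with a model trained on the translated images, then classify the Koos grade from handcrafted features with a random forest. The sketch below is a minimal illustration of the final stage only; the label indices, the feature set (tumor volume plus per-structure volume and a tumor-contact count), and the synthetic data are assumptions for illustration, not the paper's actual design.

import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

TUMOR_LABEL = 1                  # hypothetical label index for the VS tumor
STRUCTURE_LABELS = range(2, 9)   # hypothetical labels for the 7 adjacent structures


def extract_features(seg: np.ndarray) -> np.ndarray:
    """Handcrafted features from one 3D label map: tumor volume plus,
    for each adjacent structure, its volume and its voxel overlap with
    a one-voxel dilation of the tumor (a crude contact measure)."""
    tumor = seg == TUMOR_LABEL
    dilated = ndimage.binary_dilation(tumor)
    feats = [tumor.sum()]
    for lbl in STRUCTURE_LABELS:
        structure = seg == lbl
        feats.append(structure.sum())
        feats.append((dilated & structure).sum())
    return np.asarray(feats, dtype=float)


# Synthetic stand-ins for segmentations produced by the hrT2 model,
# paired with Koos grades (1-4) inherited from the ceT1 annotations.
rng = np.random.default_rng(0)
train_segs = [rng.integers(0, 9, size=(32, 32, 32)) for _ in range(40)]
train_grades = rng.integers(1, 5, size=40)

X = np.stack([extract_features(s) for s in train_segs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, train_grades)

# Predict the Koos grade for one new (unlabeled) hrT2 segmentation.
test_seg = rng.integers(0, 9, size=(32, 32, 32))
print(clf.predict(extract_features(test_seg)[None, :])[0])

A random forest is a reasonable fit here: the feature vector is short, the training set (translated, annotated scans) is small, and tree ensembles need little tuning compared with training a separate classification network.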
