ResearchTrend.AI
Keeping Representation Similarity in Finetuning for Medical Image Analysis

10 March 2025
Wenqiang Zu
Shenghao Xie
Hao Chen
Yiming Liang
Lei Ma
Topics: MedIm, OOD
Abstract

Foundation models pretrained on large-scale natural images are widely adapted to medical image analysis through finetuning, largely because their pretrained representations capture universal, robust, and generalizable features that downstream tasks can reuse. However, these representations have been found to gradually vanish during finetuning, accompanied by a degradation of the foundation model's original abilities, e.g., its generalizability. In this paper, we argue that pretrained representations can be well preserved while still adapting effectively to downstream tasks. We study this by proposing a new finetuning method, RepSim, which minimizes the distance between pretrained and finetuned representations by constraining a learnable orthogonal manifold based on similarity invariance. Compared to standard finetuning methods, e.g., full finetuning, our method improves representation similarity by over 30% while maintaining competitive accuracy, and reduces sharpness by 42% across five medical image classification datasets. The code will be released.
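The abstract describes regularizing finetuning so that finetuned representations stay close to the pretrained ones under a similarity-invariant measure. RepSim's exact formulation (the learnable orthogonal manifold) is not given on this page; as a rough illustration of the general idea, the sketch below computes linear CKA, a standard representation-similarity measure that is invariant to orthogonal transformations of the feature space, and uses it as an auxiliary penalty. The function names and the weight `lam` are hypothetical, not the paper's.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (rows = samples).

    Invariant to orthogonal transformations and isotropic scaling of
    either feature space; returns a value in [0, 1], with 1 meaning
    the representations are identical up to such transformations.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

def similarity_regularized_loss(task_loss, feats_finetuned, feats_pretrained, lam=0.1):
    """Task loss plus a penalty for drifting away from pretrained features."""
    return task_loss + lam * (1.0 - linear_cka(feats_finetuned, feats_pretrained))
```

Because linear CKA ignores orthogonal rotations of the feature space, the penalty discourages only genuine loss of pretrained structure, not harmless re-parameterizations — the same motivation the abstract gives for building on similarity invariance.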

@article{zu2025_2503.07399,
  title={Keeping Representation Similarity in Finetuning for Medical Image Analysis},
  author={Wenqiang Zu and Shenghao Xie and Hao Chen and Yiming Liang and Lei Ma},
  journal={arXiv preprint arXiv:2503.07399},
  year={2025}
}