Comparison of ConvNeXt and Vision-Language Models for Breast Density Assessment in Screening Mammography

Main: 5 pages
4 figures
Bibliography: 1 page
1 table
Abstract

Mammographic breast density classification is essential for cancer risk assessment but remains challenging due to subjective interpretation and inter-observer variability. This study compares multimodal and CNN-based methods for automated classification using the BI-RADS system, evaluating BioMedCLIP and ConvNeXt across three learning scenarios: zero-shot classification, linear probing with textual descriptions, and fine-tuning with numerical labels. Results show that zero-shot classification achieved modest performance, while the fine-tuned ConvNeXt model outperformed the BioMedCLIP linear probe. Although linear probing demonstrated potential with pretrained embeddings, it was less effective than full fine-tuning. These findings suggest that despite the promise of multimodal learning, CNN-based models with end-to-end fine-tuning provide stronger performance for specialized medical imaging. The study underscores the need for more detailed textual representations and domain-specific adaptations in future radiology applications.
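The zero-shot setup described above can be sketched as follows: a CLIP-style model such as BioMedCLIP embeds the mammogram and a set of BI-RADS density text prompts into a shared space, and the prompt with the highest cosine similarity to the image embedding gives the predicted class. The snippet below is a minimal illustration of that mechanism using random placeholder embeddings; the prompt wording, embedding dimension, and all vectors are assumptions, not the paper's actual prompts or model outputs.

```python
import numpy as np

# Placeholder embeddings standing in for BioMedCLIP outputs (assumption):
# in zero-shot classification, an image embedding is compared against text
# embeddings of BI-RADS density descriptions via cosine similarity.
rng = np.random.default_rng(0)
dim = 512  # typical CLIP-style embedding size (assumption)

prompts = [
    "breast tissue is almost entirely fatty",     # BI-RADS A
    "scattered areas of fibroglandular density",  # BI-RADS B
    "breast tissue is heterogeneously dense",     # BI-RADS C
    "breast tissue is extremely dense",           # BI-RADS D
]
text_emb = rng.normal(size=(len(prompts), dim))  # placeholder text embeddings
image_emb = rng.normal(size=dim)                 # placeholder image embedding

def zero_shot_predict(image_emb, text_emb):
    """Return the index of the prompt most similar to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarities, one per prompt
    return int(np.argmax(sims))

pred = zero_shot_predict(image_emb, text_emb)
print(prompts[pred])
```

The linear-probe variant would instead freeze the image encoder and fit a linear classifier on these embeddings, while the fine-tuned ConvNeXt updates all weights end to end.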

@article{molina-román2025_2506.13964,
  title={Comparison of ConvNeXt and Vision-Language Models for Breast Density Assessment in Screening Mammography},
  author={Yusdivia Molina-Román and David Gómez-Ortiz and Ernestina Menasalvas-Ruiz and José Gerardo Tamez-Peña and Alejandro Santos-Díaz},
  journal={arXiv preprint arXiv:2506.13964},
  year={2025}
}