
scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data

Main: 14 pages
Bibliography: 1 page
Appendix: 12 pages
13 figures, 12 tables
Abstract

Self-supervised learning (SSL) has proven to be a powerful approach for extracting biologically meaningful representations from single-cell data. To advance our understanding of SSL methods applied to single-cell data, we present scSSL-Bench, a comprehensive benchmark that evaluates nineteen SSL methods. Our evaluation spans nine datasets and focuses on three common downstream tasks: batch correction, cell type annotation, and missing modality prediction. Furthermore, we systematically assess various data augmentation strategies. Our analysis reveals task-specific trade-offs: the specialized single-cell frameworks scVI, CLAIRE, and the fine-tuned scGPT excel at uni-modal batch correction, while generic SSL methods, such as VICReg and SimCLR, demonstrate superior performance in cell type annotation and multi-modal data integration. Random masking emerges as the most effective augmentation technique across all tasks, surpassing domain-specific augmentations. Notably, our results indicate the need for a specialized single-cell multi-modal data integration framework. scSSL-Bench provides a standardized evaluation platform and concrete recommendations for applying SSL to single-cell analysis, advancing the convergence of deep learning and single-cell genomics.
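
A minimal sketch of the random-masking augmentation highlighted above (an illustration only, not the paper's implementation; the masking rate, tensor shapes, and toy data are assumptions):

import torch

def random_mask(x: torch.Tensor, mask_rate: float = 0.2) -> torch.Tensor:
    # Zero out a random subset of gene-expression values per cell.
    # mask_rate is an assumed hyperparameter, not taken from the paper.
    mask = torch.rand_like(x) < mask_rate
    return x.masked_fill(mask, 0.0)

expr = torch.rand(64, 2000)   # toy batch: 64 cells x 2000 genes
view_a = random_mask(expr)    # two independently masked views of the same
view_b = random_mask(expr)    # cells, as consumed by SimCLR/VICReg-style objectives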

@article{ovcharenko2025_2506.10031,
  title={scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data},
  author={Olga Ovcharenko and Florian Barkmann and Philip Toma and Imant Daunhawer and Julia Vogt and Sebastian Schelter and Valentina Boeva},
  journal={arXiv preprint arXiv:2506.10031},
  year={2025}
}