Language-Enhanced Representation Learning for Single-Cell Transcriptomics

Single-cell RNA sequencing (scRNA-seq) offers detailed insights into cellular heterogeneity. Recent advances leverage single-cell large language models (scLLMs) for effective representation learning. However, these models focus exclusively on transcriptomic data, neglecting the complementary biological knowledge available in textual descriptions. To overcome this limitation, we propose scMMGPT, a novel multimodal framework designed for language-enhanced representation learning in single-cell transcriptomics. Unlike existing methods, scMMGPT employs robust cell representation extraction that preserves quantitative gene expression information, and introduces an innovative two-stage pre-training strategy that combines discriminative precision with generative flexibility. Extensive experiments demonstrate that scMMGPT significantly outperforms both unimodal and multimodal baselines on key downstream tasks, including cell annotation and clustering, and exhibits superior generalization in out-of-distribution scenarios.
@article{shi2025_2503.09427,
  title={Language-Enhanced Representation Learning for Single-Cell Transcriptomics},
  author={Yaorui Shi and Jiaqi Yang and Changhao Nai and Sihang Li and Junfeng Fang and Xiang Wang and Zhiyuan Liu and Yang Zhang},
  journal={arXiv preprint arXiv:2503.09427},
  year={2025}
}