
A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features

27 May 2025
Ihab Bendidi
Yassir El Mesbahi
Alisandra K. Denton
Karush Suri
Kian Kenyon-Dean
Auguste Genovesio
Emmanuel Noutahi
arXiv:2505.21317
Main: 9 pages · 8 figures · 3 tables · Bibliography: 4 pages · Appendix: 6 pages
Abstract

Understanding cellular responses to stimuli is crucial for biological discovery and drug development. Transcriptomics provides interpretable, gene-level insights, while microscopy imaging offers rich predictive features but is harder to interpret. Weakly paired datasets, where samples share biological states, enable multimodal learning but are scarce, limiting their utility for training and multimodal inference. We propose a framework to enhance transcriptomics by distilling knowledge from microscopy images. Using weakly paired data, our method aligns and binds modalities, enriching gene expression representations with morphological information. To address data scarcity, we introduce (1) Semi-Clipped, an adaptation of CLIP for cross-modal distillation using pretrained foundation models, achieving state-of-the-art results, and (2) PEA (Perturbation Embedding Augmentation), a novel augmentation technique that enhances transcriptomics data while preserving inherent biological information. These strategies improve the predictive power and retain the interpretability of transcriptomics, enabling rich unimodal representations for complex biological tasks.
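The abstract describes Semi-Clipped as a CLIP adaptation for cross-modal distillation, aligning transcriptomics embeddings with morphology embeddings from weakly paired samples. The paper's exact formulation is not given here; below is an illustrative sketch of the general technique CLIP-style methods build on — a symmetric InfoNCE contrastive loss over paired embeddings. The function name, the NumPy implementation, and the temperature value (0.07, the common CLIP initialization) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_alignment_loss(tx_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss between transcriptomics embeddings
    (tx_emb, shape (N, D)) and morphology embeddings (img_emb, (N, D)),
    where row i of each matrix comes from the same biological state.
    Matching pairs sit on the diagonal of the similarity matrix."""
    tx = l2_normalize(tx_emb)
    img = l2_normalize(img_emb)
    logits = tx @ img.T / temperature            # (N, N) cosine similarities
    targets = np.arange(len(logits))             # diagonal entries are positives

    def xent(lg):
        # numerically stable softmax cross-entropy against the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), targets].mean()

    # average of both retrieval directions (tx -> img and img -> tx)
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each gene-expression embedding toward the image embedding of the same perturbation while pushing it away from the other samples in the batch, which is the alignment step the abstract refers to before distillation.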

@article{bendidi2025_2505.21317,
  title={A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features},
  author={Ihab Bendidi and Yassir El Mesbahi and Alisandra K. Denton and Karush Suri and Kian Kenyon-Dean and Auguste Genovesio and Emmanuel Noutahi},
  journal={arXiv preprint arXiv:2505.21317},
  year={2025}
}