ResearchTrend.AI
Exploring Non-contrastive Self-supervised Representation Learning for Image-based Profiling

17 June 2025
Siran Dai
Qianqian Xu
Peisong Wen
Yang Liu
Qingming Huang
ArXiv (abs) · PDF · HTML
Main: 4 pages · Bibliography: 2 pages · Appendix: 1 page · 1 table
Abstract

Image-based cell profiling aims to create informative representations of cell images. This technique is critical in drug discovery and has greatly advanced with recent improvements in computer vision. Inspired by recent developments in non-contrastive Self-Supervised Learning (SSL), this paper provides an initial exploration into training a generalizable feature extractor for cell images using such methods. However, there are two major challenges: 1) There is a large difference between the distributions of cell images and natural images, causing the view-generation process in existing SSL methods to fail; and 2) Unlike typical scenarios where each representation is based on a single image, cell profiling often involves multiple input images, making it difficult to effectively combine all available information. To overcome these challenges, we propose SSLProfiler, a non-contrastive SSL framework specifically designed for cell profiling. We introduce specialized data augmentation and representation post-processing methods tailored to cell images, which effectively address the issues mentioned above and result in a robust feature extractor. With these improvements, SSLProfiler won the Cell Line Transferability challenge at CVPR 2025.
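The abstract does not spell out SSLProfiler's loss or its representation post-processing, so the following is only a minimal sketch of the two generic ingredients it builds on: a SimSiam-style non-contrastive objective (negative cosine similarity between two augmented views, with a stop-gradient on one branch), and mean-pooling of per-image embeddings into a single profile when a sample comprises multiple cell images. All function names and the choice of mean pooling are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project vectors onto the unit sphere (numerically safe)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def negative_cosine_loss(p, z):
    """SimSiam-style non-contrastive loss: negative cosine similarity
    between predictor output p and target z. In a real training loop,
    z comes from the other augmented view and is wrapped in a
    stop-gradient; here both are plain arrays of shape (batch, dim)."""
    p = l2_normalize(p)
    z = l2_normalize(z)
    return -np.mean(np.sum(p * z, axis=-1))

def aggregate_profile(embeddings):
    """Combine per-image embeddings (n_images, dim) into one profile
    vector by mean pooling plus re-normalization — one common, simple
    post-processing choice for multi-image cell profiling."""
    return l2_normalize(np.mean(embeddings, axis=0))
```

The loss is minimized (value -1) when the two views' representations align, which is why non-contrastive methods depend so heavily on a view-generation process suited to the data distribution — the exact failure mode the abstract identifies for cell images.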

@article{dai2025_2506.14265,
  title={Exploring Non-contrastive Self-supervised Representation Learning for Image-based Profiling},
  author={Siran Dai and Qianqian Xu and Peisong Wen and Yang Liu and Qingming Huang},
  journal={arXiv preprint arXiv:2506.14265},
  year={2025}
}