Benchmarking community drug response prediction models: datasets, models, tools, and metrics for cross-dataset generalization analysis

18 March 2025
Alexander Partin
Priyanka Vasanthakumari
Oleksandr Narykov
Andreas Wilke
Natasha Koussa
Sara E. Jones
Yitan Zhu
Jamie C. Overbeek
Rajeev Jain
Gayara Demini Fernando
Cesar Sanchez-Villalobos
Cristina Garcia-Cardona
Jamaludin Mohd-Yusof
Nicholas Chia
Justin M. Wozniak
Souparno Ghosh
Ranadip Pal
Thomas S. Brettin
M. Ryan Weil
Rick L. Stevens
Abstract

Deep learning (DL) and machine learning (ML) models have shown promise in drug response prediction (DRP), yet their ability to generalize across datasets remains an open question, raising concerns about their real-world applicability. Due to the lack of standardized benchmarking approaches, model evaluations and comparisons often rely on inconsistent datasets and evaluation criteria, making it difficult to assess true predictive capabilities. In this work, we introduce a benchmarking framework for evaluating cross-dataset prediction generalization in DRP models. Our framework incorporates five publicly available drug screening datasets, six standardized DRP models, and a scalable workflow for systematic evaluation. To assess model generalization, we introduce a set of evaluation metrics that quantify both absolute performance (e.g., predictive accuracy across datasets) and relative performance (e.g., performance drop compared to within-dataset results), enabling a more comprehensive assessment of model transferability. Our results reveal substantial performance drops when models are tested on unseen datasets, underscoring the importance of rigorous generalization assessments. While several models demonstrate relatively strong cross-dataset generalization, no single model consistently outperforms the others across all datasets. Furthermore, we identify CTRPv2 as the most effective source dataset for training, yielding higher generalization scores across target datasets. By sharing this standardized evaluation framework with the community, our study aims to establish a rigorous foundation for model comparison and accelerate the development of robust DRP models for real-world applications.
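The abstract distinguishes absolute performance (predictive accuracy on an unseen target dataset) from relative performance (the drop compared to within-dataset results). A minimal Python sketch of how such a pair of metrics could be computed is shown below; the function name and the choice of Pearson correlation as the accuracy score are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def generalization_metrics(y_true_target, y_pred_target, within_score):
    """Illustrative cross-dataset generalization metrics (hypothetical helper).

    y_true_target / y_pred_target: observed and predicted drug responses on an
    unseen target dataset.
    within_score: the model's score (here assumed to be Pearson r) from
    within-dataset evaluation on its source dataset.
    """
    # Absolute performance: Pearson correlation on the target dataset.
    r = np.corrcoef(np.asarray(y_true_target), np.asarray(y_pred_target))[0, 1]
    # Relative performance: drop versus the within-dataset result.
    drop = within_score - r
    return {"absolute_r": r, "relative_drop": drop}
```

Under this sketch, a large `relative_drop` flags a model whose within-dataset accuracy does not transfer to unseen datasets, which is the failure mode the benchmark is designed to expose.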

View on arXiv
@article{partin2025_2503.14356,
  title={Benchmarking community drug response prediction models: datasets, models, tools, and metrics for cross-dataset generalization analysis},
  author={Alexander Partin and Priyanka Vasanthakumari and Oleksandr Narykov and Andreas Wilke and Natasha Koussa and Sara E. Jones and Yitan Zhu and Jamie C. Overbeek and Rajeev Jain and Gayara Demini Fernando and Cesar Sanchez-Villalobos and Cristina Garcia-Cardona and Jamaludin Mohd-Yusof and Nicholas Chia and Justin M. Wozniak and Souparno Ghosh and Ranadip Pal and Thomas S. Brettin and M. Ryan Weil and Rick L. Stevens},
  journal={arXiv preprint arXiv:2503.14356},
  year={2025}
}