ResearchTrend.AI

MTPNet: Multi-Grained Target Perception for Unified Activity Cliff Prediction

5 June 2025
Zishan Shu
Yufan Deng
Hongyu Zhang
Zhiwei Nie
Jie Chen
arXiv (abs) · PDF · HTML
Main: 7 pages · 4 figures · 3 tables · Bibliography: 2 pages
Abstract

Activity cliff prediction is a critical task in drug discovery and material design. Existing computational methods are limited to single binding targets, which restricts the applicability of these prediction models. In this paper, we present the Multi-Grained Target Perception network (MTPNet), which incorporates prior knowledge of the interactions between molecules and their target proteins. Specifically, MTPNet is a unified framework for activity cliff prediction consisting of two components: Macro-level Target Semantic (MTS) guidance and Micro-level Pocket Semantic (MPS) guidance. In this way, MTPNet dynamically optimizes molecular representations under multi-grained protein semantic conditions. To our knowledge, this is the first work to employ receptor proteins as guiding information to effectively capture critical interaction details. Extensive experiments on 30 representative activity cliff datasets demonstrate that MTPNet significantly outperforms previous approaches, achieving an average RMSE improvement of 18.95% on top of several mainstream GNN architectures. Overall, MTPNet internalizes interaction patterns through conditional deep learning to achieve unified activity cliff prediction, helping to accelerate compound optimization and design. Code is available at: this https URL.
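The abstract describes conditioning molecular representations on protein semantics at two granularities (target-level and pocket-level). The paper does not specify the conditioning mechanism, so the following is only a minimal illustrative sketch, assuming a FiLM-style (feature-wise scale-and-shift) conditioning with randomly initialized weights; all names and dimensions here are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def condition(mol_emb, prot_emb, w_gamma, w_beta):
    """FiLM-style conditioning (hypothetical sketch): derive a per-feature
    scale (gamma) and shift (beta) from a protein embedding and apply them
    to the molecular embedding."""
    gamma = prot_emb @ w_gamma  # (d_mol,)
    beta = prot_emb @ w_beta    # (d_mol,)
    return gamma * mol_emb + beta

d_mol, d_prot = 8, 16
mol = rng.normal(size=d_mol)        # molecular (e.g. GNN) embedding
target = rng.normal(size=d_prot)    # macro-level target embedding (MTS-like)
pocket = rng.normal(size=d_prot)    # micro-level pocket embedding (MPS-like)

# Separate projection weights per granularity (randomly initialized here).
w_g1, w_b1 = rng.normal(size=(d_prot, d_mol)), rng.normal(size=(d_prot, d_mol))
w_g2, w_b2 = rng.normal(size=(d_prot, d_mol)), rng.normal(size=(d_prot, d_mol))

h = condition(mol, target, w_g1, w_b1)  # coarse, target-level guidance
h = condition(h, pocket, w_g2, w_b2)    # fine, pocket-level guidance
print(h.shape)  # (8,)
```

The point of the sketch is only the data flow: the same molecular embedding is successively modulated by coarse-grained and then fine-grained protein conditions before being passed to a property-prediction head.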

@article{shu2025_2506.05427,
  title={MTPNet: Multi-Grained Target Perception for Unified Activity Cliff Prediction},
  author={Zishan Shu and Yufan Deng and Hongyu Zhang and Zhiwei Nie and Jie Chen},
  journal={arXiv preprint arXiv:2506.05427},
  year={2025}
}