
Deep Learning Model Acceleration and Optimization Strategies for Real-Time Recommendation Systems

13 June 2025
Junli Shao
Jing Dong
Dingzhou Wang
Kowei Shih
Dannier Li
Chengrui Zhou
Main: 5 pages, 7 figures, 1 table
Abstract

With the rapid growth of Internet services, recommendation systems play a central role in delivering personalized content. Faced with massive user requests and complex model architectures, the key challenge for real-time recommendation systems is how to reduce inference latency and increase system throughput without sacrificing recommendation quality. This paper addresses the high computational cost and resource bottlenecks of deep learning models in real-time settings by proposing a combined set of modeling- and system-level acceleration and optimization strategies. At the model level, we dramatically reduce parameter counts and compute requirements through lightweight network design, structured pruning, and weight quantization. At the system level, we integrate multiple heterogeneous compute platforms and high-performance inference libraries, and we design elastic inference scheduling and load-balancing mechanisms based on real-time load characteristics. Experiments show that, while maintaining the original recommendation accuracy, our methods cut latency to less than 30% of the baseline and more than double system throughput, offering a practical solution for deploying large-scale online recommendation services.
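The abstract names weight quantization as one of the model-level techniques used to cut compute cost. The paper's exact quantization scheme is not described on this page, so the following is only a minimal, generic sketch of symmetric int8 weight quantization in pure Python; the function names and the per-tensor scale are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: symmetric per-tensor int8 quantization.
# Floats are mapped to integers in [-127, 127] via a shared scale,
# then recovered approximately by multiplying the scale back in.

def quantize_int8(weights):
    """Quantize a list of float weights to int8 with one shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float weights."""
    return [v * scale for v in q]

weights = [0.02, -0.51, 0.33, 1.27, -1.27]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Storing `q` as int8 reduces weight memory roughly 4x versus float32, which is the kind of parameter/compute reduction the abstract attributes to its model-level optimizations.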

@article{shao2025_2506.11421,
  title={Deep Learning Model Acceleration and Optimization Strategies for Real-Time Recommendation Systems},
  author={Junli Shao and Jing Dong and Dingzhou Wang and Kowei Shih and Dannier Li and Chengrui Zhou},
  journal={arXiv preprint arXiv:2506.11421},
  year={2025}
}