ResearchTrend.AI

Towards A Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms

11 December 2023
Jingran Shen
Nikos Tziritas
Georgios Theodoropoulos
arXiv: 2312.06440

Papers citing "Towards A Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms"

5 papers shown

  1. PTEENet: Post-Trained Early-Exit Neural Networks Augmentation for Inference Cost Optimization
     Assaf Lahiany, Yehudit Aperstein
     07 Jan 2025
  2. Multi-user Co-inference with Batch Processing Capable Edge Server
     Wenqi Shi, Sheng Zhou, Z. Niu, Miao Jiang, Lu Geng
     03 Jun 2022
  3. Auto-Split: A General Framework of Collaborative Edge-Cloud AI
     Amin Banitalebi-Dehkordi, Naveen Vedula, J. Pei, Fei Xia, Lanjun Wang, Yong Zhang
     30 Aug 2021
  4. Joint Multi-User DNN Partitioning and Computational Resource Allocation for Collaborative Edge Intelligence
     Xin Tang, Xu Chen, Liekang Zeng, Shuai Yu, Lin Chen
     15 Jul 2020
  5. Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing
     En Li, Liekang Zeng, Zhi Zhou, Xu Chen
     04 Oct 2019