ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

InferLine: ML Prediction Pipeline Provisioning and Management for Tight Latency Objectives
5 December 2018
D. Crankshaw, Gur-Eyal Sela, Corey Zumar, Xiangxi Mo, Joseph E. Gonzalez, Ion Stoica, Alexey Tumanov

Papers citing "InferLine: ML Prediction Pipeline Provisioning and Management for Tight Latency Objectives"

4 papers shown
AutonoML: Towards an Integrated Framework for Autonomous Machine Learning
D. Kedziora, Katarzyna Musial, Bogdan Gabrys
23 Dec 2020
Online Learning Demands in Max-min Fairness
Kirthevasan Kandasamy, Gur-Eyal Sela, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica
15 Dec 2020
A Tensor Compiler for Unified Machine Learning Prediction Serving
Supun Nakandala, Karla Saur, Gyeong-In Yu, Konstantinos Karanasos, Carlo Curino, Markus Weimer, Matteo Interlandi
09 Oct 2020
The OoO VLIW JIT Compiler for GPU Inference
Paras Jain, Xiangxi Mo, Ajay Jain, Alexey Tumanov, Joseph E. Gonzalez, Ion Stoica
28 Jan 2019