Fifer: Tackling Underutilization in the Serverless Era

28 August 2020
Jashwant Raj Gunasekaran, P. Thinakaran, N. Nachiappan, M. Kandemir, Chita R. Das
arXiv: 2008.12819
Abstract

Datacenters are witnessing a rapid surge in the adoption of serverless functions for microservices-based applications. A vast majority of these microservices typically run for less than a second, have strict SLO requirements, and are chained together according to the requirements of an application. These characteristics introduce a new set of challenges, especially for container provisioning and management, because the state-of-the-art resource management frameworks employed in serverless platforms treat microservice-based applications like conventional monolithic applications. As a result, these frameworks suffer from microservice-agnostic scheduling and colossal container over-provisioning, especially during workload fluctuations, leading to poor resource utilization. In this work, we quantify these shortcomings using a variety of workloads on a multi-node cluster managed by Kubernetes and the Brigade serverless framework. To address them, we propose Fifer, an adaptive resource management framework for efficiently managing function chains on serverless platforms. The key idea is to make Fifer (i) utilization-conscious, by efficiently bin-packing jobs into fewer containers using function-aware container scaling and intelligent request batching, and (ii) SLO-compliant, by proactively spawning containers to avoid cold starts, thus minimizing overall response latency. Combining these benefits, Fifer improves container utilization and cluster-wide energy consumption by 4x and 31%, respectively, without compromising SLOs, compared to the state-of-the-art schedulers employed by serverless platforms.
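
The abstract describes two mechanisms: batching requests into already-warm containers to raise utilization, and spawning containers ahead of demand so cold starts do not eat into the SLO budget. The sketch below is only an illustration of that combination, not Fifer's actual implementation; the Container/FunctionPool classes, the batch capacity of 4, and the 500 ms cold-start constant are assumptions chosen for the example.

```python
# Illustrative sketch (not Fifer's implementation) of a utilization-conscious,
# SLO-aware per-function scheduler: pack requests into warm containers first,
# and flag when a cold start would consume the remaining SLO slack.
# All names, capacities, and timing constants below are assumptions.

from dataclasses import dataclass, field
from typing import List

COLD_START_MS = 500  # assumed cold-start penalty


@dataclass
class Container:
    capacity: int        # max requests batched into one execution round (assumed)
    queued: int = 0      # requests currently assigned to this container

    def has_room(self) -> bool:
        return self.queued < self.capacity


@dataclass
class FunctionPool:
    """Warm containers for one function stage in a chain."""
    exec_ms: int                                   # assumed per-request service time
    containers: List[Container] = field(default_factory=list)

    def schedule(self, slo_slack_ms: int) -> Container:
        # (i) Utilization-conscious: batch into an existing warm container first.
        for c in self.containers:
            if c.has_room():
                c.queued += 1
                return c
        # (ii) SLO-compliant: a real system would have pre-spawned earlier so the
        # cold start is hidden; here we spawn on demand and note SLO risk.
        if slo_slack_ms < COLD_START_MS + self.exec_ms:
            print("warning: cold start likely violates the remaining SLO slack")
        c = Container(capacity=4, queued=1)
        self.containers.append(c)
        return c


# Usage: one 100 ms function stage with 600 ms of slack; six arriving requests
# end up packed into two containers instead of one container per request.
pool = FunctionPool(exec_ms=100)
for _ in range(6):
    pool.schedule(slo_slack_ms=600)
print(f"{len(pool.containers)} containers serve 6 requests")
```

Packing six requests into two containers rather than six is the kind of utilization gain the abstract's bin-packing and batching idea targets, while the slack check stands in for proactive spawning that keeps responses within the SLO.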
