
arXiv:1807.00969

YerbaBuena: Securing Deep Learning Inference Data via Enclave-based Ternary Model Partitioning

3 July 2018
Zhongshu Gu
Heqing Huang
Jialong Zhang
D. Su
Hani Jamjoom
Ankita Lamba
Dimitrios E. Pendarakis
Ian Molloy
Abstract

Deploying and serving deep learning (DL) models in the public cloud streamlines the bootstrapping of artificial intelligence (AI) services. Yet preserving the confidentiality of sensitive input data remains a concern for most service users. Accidental disclosure of user input data may breach increasingly stringent data protection regulations and inflict reputational damage. In this paper, we systematically investigate the life cycle of input data in deep learning image classification pipelines and identify the points at which information disclosure can occur. Based on these insights, we build YerbaBuena, an enclave-based model serving system that protects the confidentiality and integrity of user input data. To accommodate the performance and capacity limitations of today's enclave technology, we employ a Ternary Model Partitioning strategy that allows service users to securely partition their proprietary DL models on local machines. This lets us (I) enclose sensitive computation in a secure enclave to mitigate input information disclosure and (II) delegate non-sensitive workloads to run outside the enclave with hardware-assisted DL acceleration. Our comprehensive partitioning analysis and workload measurements demonstrate how users can automatically determine the optimal partitioning for their models, thereby maximizing confidentiality guarantees at low performance cost.
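The core idea of the abstract — keep the input-revealing front of the model inside an enclave and run the rest outside with acceleration — can be sketched as a cut-point search. The following is a minimal illustrative sketch, not the authors' actual partitioning analysis: the per-layer "exposure" scores, the threshold, and the function name `choose_partition` are all assumptions introduced for illustration.

```python
def choose_partition(exposure, threshold):
    """Pick the shortest enclave-resident prefix of a layered model.

    exposure[i] is an (assumed) estimate of how much input information
    remains recoverable from the activations produced by layer i.
    Returns the smallest cut k such that layers [0, k) run inside the
    enclave and all activations leaving the enclave fall below the
    exposure threshold; if no cut qualifies, the whole model stays in.
    """
    for i, e in enumerate(exposure):
        if e <= threshold:
            return i + 1  # layers 0..i stay in the enclave
    return len(exposure)  # no safe cut: keep everything in-enclave


# Hypothetical exposure profile: early layers retain most input detail,
# deeper layers progressively abstract it away.
layers_exposure = [0.9, 0.7, 0.4, 0.2, 0.05, 0.01]
cut = choose_partition(layers_exposure, threshold=0.3)
# Layers [0, cut) run in the enclave; layers [cut, N) run outside
# on hardware-accelerated (e.g., GPU) infrastructure.
```

A real system would derive the exposure estimates from an input-reconstruction analysis and weigh them against enclave capacity and runtime cost, which this toy threshold check omits.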
