MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

1 August 2018 · arXiv:1808.00590
L. Hanzlik, Yang Zhang, Kathrin Grosse, A. Salem, Maximilian Augustin, Michael Backes, Mario Fritz · OffRL

Papers citing "MLCapsule: Guarded Offline Deployment of Machine Learning as a Service" (41 papers shown)

Decoupled Distillation to Erase: A General Unlearning Method for Any Class-centric Tasks
Yu Zhou, Dian Zheng, Qijie Mo, Renjie Lu, Kun-Yu Lin, Wei-Shi Zheng · MU · 31 Mar 2025

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen · FedML · 15 Nov 2024

Data Poisoning and Leakage Analysis in Federated Learning
Wenqi Wei, Tiansheng Huang, Zachary Yahn, Anoop Singhal, Margaret Loper, Ling Liu · FedML, SILM · 19 Sep 2024

Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks
Peizhi Niu, Chao Pan, Siheng Chen, Olgica Milenkovic · AAML · 12 Jun 2024

Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training
Rongwu Xu, Zhixuan Fang · FedML · 21 Jan 2024

All Rivers Run to the Sea: Private Learning with Asymmetric Flows
Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr · FedML · 05 Dec 2023

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen · FedML · 11 Oct 2023

Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
Simon Queyrut, V. Schiavoni, Pascal Felber · AAML, FedML · 13 Sep 2023

Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning
Simon Queyrut, Yérom-David Bromberg, V. Schiavoni · FedML, AAML · 08 Aug 2023

Towards Open Federated Learning Platforms: Survey and Vision from Technical and Legal Perspectives
Moming Duan, Qinbin Li, Linshan Jiang, Bingsheng He · FedML · 05 Jul 2023

Watermarking Text Data on Large Language Models for Dataset Copyright
Yixin Liu, Hongsheng Hu, Xun Chen, Xuyun Zhang, Lichao Sun · WaLM · 22 May 2023

Boundary Unlearning
Min Chen, Weizhuo Gao, Gaoyang Liu, Kai Peng, Chen Wang · MU · 21 Mar 2023

A Survey of Secure Computation Using Trusted Execution Environments
Xiaoguo Li, Bowen Zhao, Guomin Yang, Tao Xiang, J. Weng, R. Deng · 23 Feb 2023

Proof of Unlearning: Definitions and Instantiation
Jiasi Weng, Shenglong Yao, Yuefeng Du, Junjie Huang, Jian Weng, Cong Wang · MU · 20 Oct 2022

Machine Learning with Confidential Computing: A Systematization of Knowledge
Fan Mo, Zahra Tarkhani, Hamed Haddadi · 22 Aug 2022

TinyMLOps: Operational Challenges for Widespread Edge AI Adoption
Sam Leroux, Pieter Simoens, Meelis Lootus, Kartik Thakore, Akshay Sharma · 21 Mar 2022

Modelling of Received Signals in Molecular Communication Systems based machine learning: Comparison of azure machine learning and Python tools
Soha Mohamed, M. S. Fayed · 19 Dec 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee · 05 Nov 2021

3LegRace: Privacy-Preserving DNN Training over TEEs and GPUs
Yue Niu, Ramy E. Ali, Salman Avestimehr · FedML · 04 Oct 2021

Data science and Machine learning in the Clouds: A Perspective for the Future
H. Barua · 02 Sep 2021

Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu · AAML · 01 Sep 2021

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris · PILM, MIACV · 04 Jul 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang · MIACV · 14 Mar 2021

ShadowNet: A Secure and Efficient On-device Model Inference System for Convolutional Neural Networks
Zhichuang Sun, Ruimin Sun, Changming Liu, A. Chowdhury, Long Lu, S. Jha · FedML · 11 Nov 2020

Offline Model Guard: Secure and Private ML on Mobile Devices
Sebastian P. Bayerl, Tommaso Frassetto, Patrick Jauernig, Korbinian Riedhammer, A. Sadeghi, T. Schneider, Emmanuel Stapf, Christian Weinert · OffRL · 05 Jul 2020

BoMaNet: Boolean Masking of an Entire Neural Network
Anuj Dubey, Rosario Cammarota, Aydin Aysu · AAML · 16 Jun 2020

DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks
H. Hashemi, Yongqin Wang, M. Annavaram · FedML · 01 Jun 2020

Parallelizing Machine Learning as a Service for the End-User
Daniela Loreti, Marco Lippi, Paolo Torroni · FedML · 28 May 2020

An Overview of Privacy in Machine Learning
Emiliano De Cristofaro · SILM · 18 May 2020

DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments
Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi · FedML · 12 Apr 2020

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Fan Yao, Adnan Siraj Rakin, Deliang Fan · AAML · 30 Mar 2020

Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy
Fatemehsadat Mireshghallah, Mohammadkazem Taram, A. Jalali, Ahmed T. Elthakeb, Dean Tullsen, H. Esmaeilzadeh · 26 Mar 2020

Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Mihailo Isakov, V. Gadepally, K. Gettings, Michel A. Kinsy · AAML · 27 Nov 2019

Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi · AAML · 17 Jul 2019

Shredder: Learning Noise Distributions to Protect Inference Privacy
Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Dean Tullsen, H. Esmaeilzadeh · 26 May 2019

A framework for the extraction of Deep Neural Networks by leveraging public data
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy · FedML, MLAU, MIACV · 22 May 2019

A First Look at Deep Learning Apps on Smartphones
Mengwei Xu, Jiawei Liu, Yuanqiang Liu, F. Lin, Yunxin Liu, Xuanzhe Liu · HAI · 08 Nov 2018

A Roadmap Towards Resilient Internet of Things for Cyber-Physical Systems
Denise Ratasich, Faiq Khalid, Florian Geissler, Radu Grosu, Muhammad Shafique, E. Bartocci · 16 Oct 2018

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
Florian Tramèr, Dan Boneh · FedML · 08 Jun 2018

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes · MIACV, MIALM · 04 Jun 2018

Safety Verification of Deep Neural Networks
Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu · AAML · 21 Oct 2016