A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

6 October 2020
Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras
AAML

Papers citing "A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference"

13 / 13 papers shown
1. ORXE: Orchestrating Experts for Dynamically Configurable Efficiency
   Qingyuan Wang, Guoxin Wang, B. Cardiff, Deepu John · 07 May 2025

2. Impact Analysis of Inference Time Attack of Perception Sensors on Autonomous Vehicles
   Hanlin Chen, Simin Chen, Wenyu Li, Wei Yang, Yiheng Feng · 05 May 2025 · AAML

3. Recent Advances in Attack and Defense Approaches of Large Language Models
   Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang · 05 Sep 2024 · PILM, AAML

4. Tiny Models are the Computational Saver for Large Models
   Qingyuan Wang, B. Cardiff, Antoine Frappé, Benoît Larras, Deepu John · 26 Mar 2024

5. DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers
   Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, A. Shabtai · 04 Feb 2024 · AAML

6. Overload: Latency Attacks on Object Detection for Edge Devices
   Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee · 11 Apr 2023 · AAML

7. GradMDM: Adversarial Attack on Dynamic Networks
   Jianhong Pan, Lin Geng Foo, Qichen Zheng, Zhipeng Fan, Hossein Rahmani, Qiuhong Ke, Jun Liu · 01 Apr 2023 · AAML

8. Fixing Overconfidence in Dynamic Neural Networks
   Lassi Meronen, Martin Trapp, Andrea Pilzer, Le Yang, Arno Solin · 13 Feb 2023 · BDL

9. Understanding the Robustness of Multi-Exit Models under Common Corruptions
   Akshay Mehra, Skyler Seto, Navdeep Jaitly, B. Theobald · 03 Dec 2022 · AAML

10. Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors
    Avishag Shapira, Alon Zolfi, Christian Scano, Battista Biggio, A. Shabtai · 26 May 2022 · AAML

11. Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
    Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu · 07 Oct 2021 · AAML

12. Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
    Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld · 17 Sep 2021 · MU, AAML

13. Adversarial Machine Learning at Scale
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · 04 Nov 2016 · AAML