ResearchTrend.AI
Sponge Examples: Energy-Latency Attacks on Neural Networks

5 June 2020
Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson
SILM

Papers citing "Sponge Examples: Energy-Latency Attacks on Neural Networks" (23 papers shown)

Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning
  Syed Mhamudul Hasan, Hussein Zangoti, Iraklis Anagnostopoulos, Abdur R. Shahid · AAML · 09 May 2025

Safety in Large Reasoning Models: A Survey
  Cheng Wang, Yong-Jin Liu, Yangqiu Song, Duzhen Zhang, ZeLin Li, Junfeng Fang, Bryan Hooi · LRM · 24 Apr 2025

OverThink: Slowdown Attacks on Reasoning LLMs
  A. Kumar, Jaechul Roh, A. Naseh, Marzena Karpinska, Mohit Iyyer, Amir Houmansadr, Eugene Bagdasarian · LRM · 04 Feb 2025

Position: A taxonomy for reporting and describing AI security incidents
  L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi · 19 Dec 2024

Non-Halting Queries: Exploiting Fixed Points in LLMs
  Ghaith Hammouri, Kemal Derya, B. Sunar · 08 Oct 2024

DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers
  Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, A. Shabtai · AAML · 04 Feb 2024

Verifiable Sustainability in Data Centers
  Syed Rafiul Hussain, Patrick McDaniel, Anshul Gandhi, K. Ghose, Kartik Gopalan, Dongyoon Lee, Yu Liu, Zhen Liu, Shuai Mu, E. Zadok · 22 Jul 2023

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
  Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · 01 Jul 2023

Overload: Latency Attacks on Object Detection for Edge Devices
  Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee · AAML · 11 Apr 2023

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
  Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli · AAML · 12 Dec 2022

Architectural Backdoors in Neural Networks
  Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot · AAML · 15 Jun 2022

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors
  Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, A. Shabtai · AAML · 26 May 2022

Energy-Latency Attacks via Sponge Poisoning
  Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · SILM · 14 Mar 2022

Towards a Responsible AI Development Lifecycle: Lessons From Information Security
  Erick Galinkin · SILM · 06 Mar 2022

A Survey on Poisoning Attacks Against Supervised Machine Learning
  Wenjun Qiu · AAML · 05 Feb 2022

Who's Afraid of Thomas Bayes?
  Erick Galinkin · AAML · 30 Jul 2021

Bad Characters: Imperceptible NLP Attacks
  Nicholas Boucher, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot · AAML, SILM · 18 Jun 2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
  Mathias Parisot, Balázs Pejó, Dayana Spagnuelo · MIACV · 27 Apr 2021

Manipulating SGD with Data Ordering Attacks
  Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson · AAML · 19 Apr 2021

Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
  Yue Gao, Ilia Shumailov, Kassem Fawaz · AAML · 18 Apr 2021

Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
  Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz · 14 Dec 2020

Data-Free Model Extraction
  Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot · MIACV · 30 Nov 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018