Cited By (arXiv:2308.10373)
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
Hejia Geng, Peng Li
20 August 2023 · AAML
Papers citing "HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds" (44 / 44 papers shown)
| Title | Authors | Tags | Counts | Date |
|---|---|---|---|---|
| Flashy Backdoor: Real-world Environment Backdoor Attack on SNNs with DVS Cameras | Roberto Riaño, Gorka Abad, S. Picek, A. Urbieta | AAML | 75 · 0 · 0 | 05 Nov 2024 |
| Robust Stable Spiking Neural Networks | Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang | AAML | 76 · 7 · 0 | 31 May 2024 |
| Enhancing Adversarial Robustness in SNNs with Sparse Gradients | Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu | AAML | 76 · 5 · 0 | 30 May 2024 |
| Adversarially Robust Spiking Neural Networks Through Conversion | Ozan Özdenizci, Robert Legenstein | AAML | 56 · 10 · 0 | 15 Nov 2023 |
| Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples | Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen | AAML | 57 · 13 · 0 | 07 Sep 2022 |
| Toward Robust Spiking Neural Network Against Adversarial Perturbation | Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie | AAML | 50 · 16 · 0 | 12 Apr 2022 |
| HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise | Souvik Kundu, Massoud Pedram, Peter A. Beerel | AAML | 68 · 75 · 0 | 06 Oct 2021 |
| Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them | Florian Tramèr | AAML | 71 · 68 · 0 | 24 Jul 2021 |
| Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters | Rida El-Allami, Alberto Marchisio, Mohamed Bennai, Ihsen Alouani | AAML | 56 · 39 · 0 | 09 Dec 2020 |
| Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey | S. Silva, Peyman Najafirad | AAML, OOD | 47 · 134 · 0 | 01 Jul 2020 |
| Towards Understanding the Effect of Leak in Spiking Neural Networks | Sayeed Shafayet Chowdhury, Chankyu Lee, Kaushik Roy | - | 44 · 57 · 0 | 15 Jun 2020 |
| Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations | Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy | AAML | 136 · 89 · 0 | 23 Mar 2020 |
| Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks | Francesco Croce, Matthias Hein | AAML | 213 · 1,842 · 0 | 03 Mar 2020 |
| Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks | Wenrui Zhang, Peng Li | - | 71 · 220 · 0 | 24 Feb 2020 |
| Rapid online learning and robust recall in a neuromorphic olfactory circuit | N. Imam, T. A. Cleland | - | 52 · 141 · 0 | 17 Jun 2019 |
| A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks | Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy | AAML | 44 · 67 · 0 | 07 May 2019 |
| Adversarial Examples Are Not Bugs, They Are Features | Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry | SILM | 89 · 1,839 · 0 | 06 May 2019 |
| Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks | Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao | AAML | 56 · 152 · 0 | 01 Apr 2019 |
| Enabling Spike-based Backpropagation for Training Deep Neural Network Architectures | Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, G. Srinivasan, Kaushik Roy | - | 76 · 396 · 0 | 15 Mar 2019 |
| On Evaluating Adversarial Robustness | Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 81 · 901 · 0 | 18 Feb 2019 |
| Adversarial Examples Are a Natural Consequence of Test Error in Noise | Nic Ford, Justin Gilmer, Nicholas Carlini, E. D. Cubuk | AAML | 83 · 319 · 0 | 29 Jan 2019 |
| Surrogate Gradient Learning in Spiking Neural Networks | Emre Neftci, Hesham Mostafa, Friedemann Zenke | - | 87 · 1,236 · 0 | 28 Jan 2019 |
| Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift | Stephan Rabanser, Stephan Günnemann, Zachary Chase Lipton | - | 54 · 367 · 0 | 29 Oct 2018 |
| Adversarial Attacks and Defences: A Survey | Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, Debdeep Mukhopadhyay | AAML, OOD | 69 · 679 · 0 | 28 Sep 2018 |
| Long short-term memory and learning-to-learn in networks of spiking neurons | G. Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass | - | 137 · 487 · 0 | 26 Mar 2018 |
| Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | Anish Athalye, Nicholas Carlini, D. Wagner | AAML | 216 · 3,185 · 0 | 01 Feb 2018 |
| Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey | Naveed Akhtar, Ajmal Mian | AAML | 93 · 1,868 · 0 | 02 Jan 2018 |
| Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning | Battista Biggio, Fabio Roli | AAML | 122 · 1,409 · 0 | 08 Dec 2017 |
| Mitigating Adversarial Effects Through Randomization | Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille | AAML | 113 · 1,058 · 0 | 06 Nov 2017 |
| Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms | Han Xiao, Kashif Rasul, Roland Vollgraf | - | 280 · 8,878 · 0 | 25 Aug 2017 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 301 · 12,063 · 0 | 19 Jun 2017 |
| Spatio-Temporal Backpropagation for Training High-performance Spiking Neural Networks | Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Luping Shi | - | 62 · 1,021 · 0 | 08 Jun 2017 |
| Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | Nicholas Carlini, D. Wagner | AAML | 118 · 1,857 · 0 | 20 May 2017 |
| Ensemble Adversarial Training: Attacks and Defenses | Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel | AAML | 177 · 2,725 · 0 | 19 May 2017 |
| On Detecting Adversarial Perturbations | J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff | AAML | 61 · 950 · 0 | 14 Feb 2017 |
| Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra | FAtt | 294 · 19,981 · 0 | 07 Oct 2016 |
| Robustness of classifiers: from adversarial to random noise | Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard | AAML | 87 · 374 · 0 | 31 Aug 2016 |
| Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 534 · 5,897 · 0 | 08 Jul 2016 |
| Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples | Nicolas Papernot, Patrick McDaniel, Ian Goodfellow | SILM, AAML | 112 · 1,740 · 0 | 24 May 2016 |
| DeepFool: a simple and accurate method to fool deep neural networks | Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard | AAML | 148 · 4,895 · 0 | 14 Nov 2015 |
| Explaining and Harnessing Adversarial Examples | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | AAML, GAN | 271 · 19,045 · 0 | 20 Dec 2014 |
| Very Deep Convolutional Networks for Large-Scale Image Recognition | Karen Simonyan, Andrew Zisserman | FAtt, MDE | 1.6K · 100,330 · 0 | 04 Sep 2014 |
| Intriguing properties of neural networks | Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus | AAML | 268 · 14,912 · 1 | 21 Dec 2013 |
| Visualizing and Understanding Convolutional Networks | Matthew D. Zeiler, Rob Fergus | FAtt, SSL | 589 · 15,876 · 0 | 12 Nov 2013 |