Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks (arXiv:1902.01147)
4 February 2019
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
AAML

Papers citing "Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks"

14 / 14 papers shown (each entry: title; authors; topic tags, site metrics, and date)

FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Junaid Qadir, Muhammad Shafique
AAML | 34 | 34 | 0 | 04 Nov 2018

Adversarial Examples: Opportunities and Challenges
Jiliang Zhang, Chen Li
AAML | 53 | 233 | 0 | 13 Sep 2018

Are adversarial examples inevitable?
Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein
SILM | 53 | 282 | 0 | 06 Sep 2018

Deep Learning in Spiking Neural Networks
A. Tavanaei, M. Ghodrati, Saeed Reza Kheradpisheh, T. Masquelier, Anthony Maida
52 | 1,071 | 0 | 22 Apr 2018

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
Bo Luo, Yannan Liu, Lingxiao Wei, Q. Xu
AAML | 51 | 142 | 0 | 15 Jan 2018

Weighted Contrastive Divergence
E. Romero, F. Mazzanti, Jordi Delgado, David Buchaca Prats
12 | 23 | 0 | 08 Jan 2018

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD | 269 | 12,029 | 0 | 19 Jun 2017

Enhancing Robustness of Machine Learning Systems via Data Transformations
A. Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal
AAML | 48 | 231 | 0 | 09 Apr 2017

Detecting Adversarial Samples from Artifacts
Reuben Feinman, Ryan R. Curtin, S. Shintre, Andrew B. Gardner
AAML | 90 | 892 | 0 | 01 Mar 2017

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML | 517 | 5,893 | 0 | 08 Jul 2016

evt_MNIST: A spike based version of traditional MNIST
Mazdak Fatahi, M. Ahmadi, Mahyar Shahsavari, A. Ahmadi, P. Devienne
29 | 23 | 0 | 22 Apr 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU, AAML | 66 | 3,676 | 0 | 08 Feb 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN | 233 | 19,017 | 0 | 20 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML | 239 | 14,893 | 1 | 21 Dec 2013