ResearchTrend.AI

arXiv:2101.10562

Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems
26 January 2021
Utku Ozbulak, Baptist Vandersmissen, A. Jalalvand, Ivo Couckuyt, Arnout Van Messem, W. D. Neve
AAML

Papers citing "Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems" (22 of 22 papers shown)
  1. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
     Dongxian Wu, Yisen Wang, Shutao Xia, James Bailey, Xingjun Ma
     AAML, SILM · 76 · 313 · 0 · 14 Feb 2020
  2. On the Connection Between Adversarial Robustness and Saliency Map Interpretability
     Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
     AAML, FAtt · 58 · 162 · 0 · 10 May 2019
  3. AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
     Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
     MLAU, AAML · 84 · 397 · 0 · 30 May 2018
  4. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
     Anish Athalye, Nicholas Carlini, D. Wagner
     AAML · 224 · 3,186 · 0 · 01 Feb 2018
  5. Adversarial Patch
     Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer
     AAML · 76 · 1,094 · 0 · 27 Dec 2017
  6. Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?
     Kensho Hara, Hirokatsu Kataoka, Y. Satoh
     3DPC · 126 · 1,934 · 0 · 27 Nov 2017
  7. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
     A. Ross, Finale Doshi-Velez
     AAML · 147 · 682 · 0 · 26 Nov 2017
  8. Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable Sensors
     Yu Zhao, Rennong Yang, Guillaume Chevalier, Maoguo Gong
     BDL, HAI · 59 · 313 · 0 · 22 Aug 2017
  9. Evasion Attacks against Machine Learning at Test Time
     Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
     AAML · 157 · 2,153 · 0 · 21 Aug 2017
  10. Towards Deep Learning Models Resistant to Adversarial Attacks
      Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
      SILM, OOD · 307 · 12,069 · 0 · 19 Jun 2017
  11. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
      Nicholas Carlini, D. Wagner
      AAML · 123 · 1,857 · 0 · 20 May 2017
  12. Learning Important Features Through Propagating Activation Differences
      Avanti Shrikumar, Peyton Greenside, A. Kundaje
      FAtt · 201 · 3,873 · 0 · 10 Apr 2017
  13. Aggregated Residual Transformations for Deep Neural Networks
      Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
      514 · 10,330 · 0 · 16 Nov 2016
  14. Universal adversarial perturbations
      Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
      AAML · 136 · 2,527 · 0 · 26 Oct 2016
  15. Towards Evaluating the Robustness of Neural Networks
      Nicholas Carlini, D. Wagner
      OOD, AAML · 266 · 8,555 · 0 · 16 Aug 2016
  16. Adversarial examples in the physical world
      Alexey Kurakin, Ian Goodfellow, Samy Bengio
      SILM, AAML · 540 · 5,897 · 0 · 08 Jul 2016
  17. The Mythos of Model Interpretability
      Zachary Chase Lipton
      FaML · 180 · 3,701 · 0 · 10 Jun 2016
  18. Learning Deep Features for Discriminative Localization
      Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
      SSL, SSeg, FAtt · 250 · 9,319 · 0 · 14 Dec 2015
  19. Intriguing properties of neural networks
      Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
      AAML · 270 · 14,927 · 1 · 21 Dec 2013
  20. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
      Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
      FAtt · 312 · 7,295 · 0 · 20 Dec 2013
  21. Visualizing and Understanding Convolutional Networks
      Matthew D. Zeiler, Rob Fergus
      FAtt, SSL · 595 · 15,882 · 0 · 12 Nov 2013
  22. Speech Recognition with Deep Recurrent Neural Networks
      Alex Graves, Abdel-rahman Mohamed, Geoffrey E. Hinton
      226 · 8,517 · 0 · 22 Mar 2013