ResearchTrend.AI

Adversarial examples in the physical world
arXiv:1607.02533 (v4, latest)
8 July 2016
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML

Papers citing "Adversarial examples in the physical world"

50 / 2,769 papers shown
Adversarial attacks hidden in plain sight
  Jan Philip Göpfert, André Artelt, H. Wersing, Barbara Hammer | AAML | 25 Feb 2019

Visualization, Discriminability and Applications of Interpretable Saak Features
  Abinaya Manimaran, T. Ramanathan, Suya You, C.-C. Jay Kuo | FAtt | 25 Feb 2019

Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems
  Meysam Sadeghi, Erik G. Larsson | AAML | 22 Feb 2019

Quantifying Perceptual Distortion of Adversarial Examples
  Matt Jordan, N. Manoj, Surbhi Goel, A. Dimakis | 21 Feb 2019

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
  Eric Wong, Frank R. Schmidt, J. Zico Kolter | AAML | 21 Feb 2019

Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers
  Diego Gragnaniello, Francesco Marra, Giovanni Poggi, L. Verdoliva | AAML | 20 Feb 2019

Reconstruction of 3-D Atomic Distortions from Electron Microscopy with Deep Learning
  N. Laanait, Qian He, A. Borisevich | 19 Feb 2019

On Evaluating Adversarial Robustness
  Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | ELM, AAML | 18 Feb 2019

AuxBlocks: Defense Adversarial Example via Auxiliary Blocks
  Yueyao Yu, Pengfei Yu, Wenye Li | AAML | 18 Feb 2019

Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
  S. Kokalj-Filipovic, Rob Miller, Nicholas Chang, Chi Leung Lau | AAML | 16 Feb 2019

Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness
  S. Kokalj-Filipovic, Rob Miller | AAML | 16 Feb 2019

DeepFault: Fault Localization for Deep Neural Networks
  Hasan Ferit Eniser, Simos Gerasimou, A. Sen | AAML | 15 Feb 2019

The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
  Kevin Roth, Yannic Kilcher, Thomas Hofmann | AAML | 13 Feb 2019

Towards a Robust Deep Neural Network in Texts: A Survey
  Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, Aoshuang Ye | AAML | 12 Feb 2019

Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving
  Michal Uřičář, P. Krízek, David Hurych, Ibrahim Sobh, S. Yogamani, Patrick Denny | GAN | 09 Feb 2019

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
  Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Y. Tsai | BDL, AAML, CML | 09 Feb 2019

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks
  Priyadarshini Panda, I. Chakraborty, Kaushik Roy | AAML | 08 Feb 2019

Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis
  Danilo Vasconcellos Vargas, Jiawei Su | FAtt, AAML | 08 Feb 2019

Robustness Of Saak Transform Against Adversarial Attacks
  T. Ramanathan, Abinaya Manimaran, Suya You, C.-C. Jay Kuo | 07 Feb 2019

Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
  Derui Wang, Chaoran Li, S. Wen, Qing-Long Han, Surya Nepal, Xiangyu Zhang, Yang Xiang | AAML | 06 Feb 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
  Juyeon Heo, Sunghwan Joo, Taesup Moon | AAML, FAtt | 06 Feb 2019

Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
  Nicholas Frosst, Nicolas Papernot, Geoffrey E. Hinton | 05 Feb 2019

Theoretical evidence for adversarial robustness through randomization
  Rafael Pinot, Laurent Meunier, Alexandre Araujo, H. Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif | AAML | 04 Feb 2019

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
  Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai | AAML | 04 Feb 2019

Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
  S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann | OOD | 01 Feb 2019

Robustness Certificates Against Adversarial Examples for ReLU Networks
  Sahil Singla, Soheil Feizi | AAML | 01 Feb 2019

Adaptive Gradient for Adversarial Perturbations Generation
  Yatie Xiao, Chi-Man Pun | ODL | 01 Feb 2019

A New Family of Neural Networks Provably Resistant to Adversarial Attacks
  Rakshit Agrawal, Luca de Alfaro, D. Helmbold | AAML, OOD | 01 Feb 2019

Augmenting Model Robustness with Transformation-Invariant Attacks
  Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang, Yi Ren | AAML, OOD | 31 Jan 2019

Adversarial Metric Attack and Defense for Person Re-identification
  S. Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, Philip Torr | AAML | 30 Jan 2019

RED-Attack: Resource Efficient Decision based Attack for Machine Learning
  Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai | AAML | 29 Jan 2019

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
  Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Mohamed Bennai | GAN, AAML | 28 Jan 2019

Spectrum Data Poisoning with Adversarial Deep Learning
  Yi Shi, T. Erpek, Y. Sagduyu, Jason H. Li | AAML | 26 Jan 2019

Weighted-Sampling Audio Adversarial Example Attack
  Xiaolei Liu, Xiaosong Zhang, Kun Wan, Qingxin Zhu, Yufei Ding | DiffM, AAML | 26 Jan 2019

A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
  Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu | AAML | 26 Jan 2019

Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data
  Yi Shi, Y. Sagduyu, Kemal Davaslioglu, Jason H. Li | AAML | 25 Jan 2019

Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu | AAML | 25 Jan 2019

Sitatapatra: Blocking the Transfer of Adversarial Samples
  Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert D. Mullins, Ross J. Anderson, Chengzhong Xu | AAML, GAN | 23 Jan 2019

SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems
  Tianyu Du, S. Ji, Jinfeng Li, Qinchen Gu, Ting Wang, R. Beyah | AAML | 23 Jan 2019

Programmable Neural Network Trojan for Pre-Trained Feature Extractor
  Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang | AAML | 23 Jan 2019

Generating Adversarial Perturbation with Root Mean Square Gradient
  Yatie Xiao, Chi-Man Pun, Jizhe Zhou | GAN | 13 Jan 2019

ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
  Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang | AAML | 12 Jan 2019

Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification
  L. G. Hafemann, R. Sabourin, Luiz Eduardo Soares de Oliveira | AAML | 10 Jan 2019

Image Transformation can make Neural Networks more robust against Adversarial Examples
  D. D. Thang, Toshihiro Matsui | AAML | 10 Jan 2019

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers
  Daniel Liu, Ronald Yu, Hao Su | 3DPC | 10 Jan 2019

Contamination Attacks and Mitigation in Multi-Party Machine Learning
  Jamie Hayes, O. Ohrimenko | AAML, FedML | 08 Jan 2019

Interpretable BoW Networks for Adversarial Example Detection
  Krishna Kanth Nakka, Mathieu Salzmann | GAN, AAML | 08 Jan 2019

Image Super-Resolution as a Defense Against Adversarial Attacks
  Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao | AAML, SupR | 07 Jan 2019

Adversarial Examples Versus Cloud-based Detectors: A Black-box Empirical Study
  Xurong Li, S. Ji, Men Han, Juntao Ji, Zhenyu Ren, Yushan Liu, Chunming Wu | AAML | 04 Jan 2019

Multi-Label Adversarial Perturbations
  Qingquan Song, Haifeng Jin, Xiao Huang, Helen Zhou | AAML | 02 Jan 2019