Adversarial examples in the physical world
arXiv:1607.02533 · 8 July 2016
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML

Papers citing "Adversarial examples in the physical world"

50 / 1,598 papers shown
A King's Ransom for Encryption: Ransomware Classification using Augmented One-Shot Learning and Bayesian Approximation
Amir Atapour-Abarghouei, Stephen Bonner, A. Mcgough
12 · 7 · 0 · 19 Aug 2019

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai
AI4CE
34 · 176 · 0 · 17 Aug 2019

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, J. Hopcroft
AAML
11 · 552 · 0 · 17 Aug 2019

Adversarial shape perturbations on 3D point clouds
Daniel Liu, Ronald Yu, Hao Su
3DPC
17 · 12 · 0 · 16 Aug 2019

Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once
Jiangfan Han, Xiaoyi Dong, Ruimao Zhang, Dongdong Chen, Weiming Zhang, Nenghai Yu, Ping Luo, Xiaogang Wang
AAML
11 · 28 · 0 · 14 Aug 2019

Defending Against Adversarial Iris Examples Using Wavelet Decomposition
Sobhan Soleymani, Ali Dabouei, J. Dawson, Nasser M. Nasrabadi
AAML
17 · 9 · 0 · 08 Aug 2019

Universal Adversarial Audio Perturbations
Sajjad Abdoli, L. G. Hafemann, Jérôme Rony, Ismail Ben Ayed, P. Cardinal, Alessandro Lameiras Koerich
AAML
25 · 51 · 0 · 08 Aug 2019

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation
Utku Ozbulak, Arnout Van Messem, W. D. Neve
MedIm, AAML
10 · 60 · 0 · 30 Jul 2019

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, B. Kailkhura, Xue Lin
AAML
20 · 54 · 0 · 26 Jul 2019

Natural Adversarial Examples
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, D. Song
OODD
36 · 1,417 · 0 · 16 Jul 2019

Measuring the Transferability of Adversarial Examples
D. Petrov, Timothy M. Hospedales
SILM, AAML
13 · 23 · 0 · 14 Jul 2019

PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving
Zelun Kong, Junfeng Guo, Ang Li, Cong Liu
AAML
25 · 126 · 0 · 09 Jul 2019

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
Yao Qin, Nicholas Frosst, S. Sabour, Colin Raffel, G. Cottrell, Geoffrey E. Hinton
GAN, AAML
17 · 71 · 0 · 05 Jul 2019

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Francesco Croce, Matthias Hein
AAML
24 · 474 · 0 · 03 Jul 2019

Learning to Cope with Adversarial Attacks
Xian Yeow Lee, Aaron J. Havens, Girish Chowdhary, S. Sarkar
AAML
17 · 5 · 0 · 28 Jun 2019

Evolving Robust Neural Architectures to Defend from Adversarial Attacks
Shashank Kotyan, Danilo Vasconcellos Vargas
OOD, AAML
11 · 36 · 0 · 27 Jun 2019

Defending Adversarial Attacks by Correcting logits
Yifeng Li, Lingxi Xie, Ya-Qin Zhang, Rui Zhang, Yanfeng Wang, Qi Tian
AAML
18 · 5 · 0 · 26 Jun 2019

Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection
Kang Liu, Haoyu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, S. Garg
AAML
7 · 10 · 0 · 25 Jun 2019

Perceptual Based Adversarial Audio Attacks
Joseph Szurley, J. Zico Kolter
AAML
11 · 25 · 0 · 14 Jun 2019

Adversarial Mahalanobis Distance-based Attentive Song Recommender for Automatic Playlist Continuation
Thanh-Binh Tran, Renee Sweeney, Kyumin Lee
17 · 32 · 0 · 08 Jun 2019

Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
SILM, AAML
24 · 42 · 0 · 07 Jun 2019

Multi-way Encoding for Robustness
Donghyun Kim, Sarah Adel Bargal, Jianming Zhang, Stan Sclaroff
AAML
8 · 2 · 0 · 05 Jun 2019

Interpretable Neural Network Decoupling
Yuchao Li, R. Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao
21 · 2 · 0 · 04 Jun 2019

Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou, H. Lee, E. Chang, Teck Khim Ng
23 · 3 · 0 · 01 Jun 2019

Residual Networks as Nonlinear Systems: Stability Analysis using Linearization
Kai Rothauge, Z. Yao, Zixi Hu, Michael W. Mahoney
11 · 2 · 0 · 31 May 2019

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, S. Feizi
FAtt, AAML
13 · 63 · 0 · 28 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
AAML
8 · 77 · 0 · 27 May 2019

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu
AAML, GAN, FAtt
14 · 157 · 0 · 23 May 2019

Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
Giulio Lovisotto, Simon Eberz, Ivan Martinovic
AAML
11 · 35 · 0 · 22 May 2019

Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray
14 · 52 · 0 · 20 May 2019

Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
AAML
18 · 18 · 0 · 19 May 2019

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
11 · 5 · 0 · 19 May 2019

POPQORN: Quantifying Robustness of Recurrent Neural Networks
Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
AAML
15 · 75 · 0 · 17 May 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
19 · 24 · 0 · 08 May 2019

A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy
AAML
14 · 64 · 0 · 07 May 2019

Test Selection for Deep Learning Systems
Wei Ma, Mike Papadakis, Anestis Tsakmalis, Maxime Cordy, Yves Le Traon
OOD
13 · 91 · 0 · 30 Apr 2019

Data Poisoning Attack against Knowledge Graph Embedding
Hengtong Zhang, T. Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, K. Ren
KELM
10 · 81 · 0 · 26 Apr 2019

Adversarial Defense Through Network Profiling Based Path Extraction
Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, C. Li, M. Guo, Yuhao Zhu
AAML
14 · 50 · 0 · 17 Apr 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
SILM, AAML
17 · 827 · 0 · 05 Apr 2019

Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin
AAML, FAtt
8 · 43 · 0 · 03 Apr 2019

Curls & Whey: Boosting Black-Box Adversarial Attacks
Yucheng Shi, Siyu Wang, Yahong Han
AAML
11 · 116 · 0 · 02 Apr 2019

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao
AAML
9 · 151 · 0 · 01 Apr 2019

On the Vulnerability of CNN Classifiers in EEG-Based BCIs
Xiao Zhang, Dongrui Wu
AAML
8 · 81 · 0 · 31 Mar 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein
AAML
20 · 30 · 0 · 27 Mar 2019

Defending against Whitebox Adversarial Attacks via Randomized Discretization
Yuchen Zhang, Percy Liang
AAML
14 · 75 · 0 · 25 Mar 2019

The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman
AAML
18 · 25 · 0 · 25 Mar 2019

Variational Inference with Latent Space Quantization for Adversarial Resilience
Vinay Kyatham, Prathosh A. P., Tarun Kumar Yadav, Deepak Mishra, Dheeraj Mundhra
AAML
6 · 3 · 0 · 24 Mar 2019

Scalable Differential Privacy with Certified Robustness in Adversarial Learning
Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou
13 · 14 · 0 · 23 Mar 2019

Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
Yao Qin, Nicholas Carlini, Ian Goodfellow, G. Cottrell, Colin Raffel
AAML
19 · 376 · 0 · 22 Mar 2019

Adversarial camera stickers: A physical camera-based attack on deep learning systems
Juncheng Billy Li, Frank R. Schmidt, J. Zico Kolter
AAML
11 · 164 · 0 · 21 Mar 2019