Adversarial examples in the physical world
arXiv 1607.02533 · 8 July 2016
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM · AAML

Papers citing "Adversarial examples in the physical world"

50 / 2,710 papers shown
Adversarial Training is a Form of Data-dependent Operator Norm Regularization
Kevin Roth, Yannic Kilcher, Thomas Hofmann
04 Jun 2019

Interpretable Neural Network Decoupling
Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao
04 Jun 2019

A Surprising Density of Illusionable Natural Speech
M. Guan, Gregory Valiant
03 Jun 2019 · AAML

Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness
Nhathai Phan, Minh Nhat Vu, Yang Liu, R. Jin, Dejing Dou, Xintao Wu, My T. Thai
02 Jun 2019 · AAML

Adversarial Examples for Edge Detection: They Exist, and They Transfer
Christian Cosgrove, Alan Yuille
02 Jun 2019 · AAML · GAN

Enhancing Transformation-based Defenses using a Distribution Classifier
C. Kou, H. Lee, E. Chang, Teck Khim Ng
01 Jun 2019

Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification
Sid Ahmed Fezza, Yassine Bakhti, W. Hamidouche, Olivier Déforges
01 Jun 2019 · AAML

Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness
A. Malinin, Mark Gales
31 May 2019 · UQCV · AAML

Residual Networks as Nonlinear Systems: Stability Analysis using Linearization
Kai Rothauge, Z. Yao, Zixi Hu, Michael W. Mahoney
31 May 2019

Interpretable Adversarial Training for Text
Samuel Barham, S. Feizi
30 May 2019 · AAML
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
A. Qayyum, Muhammad Usama, Junaid Qadir, Ala I. Al-Fuqaha
29 May 2019 · AAML

CopyCAT: Taking Control of Neural Policies with Constant Attacks
Léonard Hussenot, M. Geist, Olivier Pietquin
29 May 2019 · AAML

An Investigation of Data Poisoning Defenses for Online Learning
Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
28 May 2019 · AAML

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, S. Feizi
28 May 2019 · FAtt · AAML

High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
Haohan Wang, Xindi Wu, Pengcheng Yin, Eric Xing
28 May 2019

Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
Pengcheng Li, Jinfeng Yi, Bowen Zhou, Lijun Zhang
28 May 2019 · AAML

Label Universal Targeted Attack
Naveed Akhtar, M. Jalwana, Bennamoun, Ajmal Mian
27 May 2019 · AAML

GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde
27 May 2019 · AAML

Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
27 May 2019 · AAML

Provable robustness against all adversarial $l_p$-perturbations for $p \geq 1$
Francesco Croce, Matthias Hein
27 May 2019 · OOD
Non-Determinism in Neural Networks for Adversarial Robustness
Daanish Ali Khan, Linhong Li, Ninghao Sha, Zhuoran Liu, Abelino Jiménez, Bhiksha Raj, Rita Singh
26 May 2019 · OOD · AAML

Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu
26 May 2019 · AAML · OOD

Adversarial Distillation for Ordered Top-k Attacks
Zekun Zhang, Tianfu Wu
25 May 2019 · AAML

Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks
Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, R. Mudumbai
25 May 2019 · AAML

Thwarting finite difference adversarial attacks with output randomization
Haidar Khan, Daniel Park, Azer Khan, B. Yener
23 May 2019 · SILM · AAML

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu
23 May 2019 · AAML · GAN · FAtt

Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
Giulio Lovisotto, Simon Eberz, Ivan Martinovic
22 May 2019 · AAML

DoPa: A Comprehensive CNN Detection Methodology against Physical Adversarial Attacks
Zirui Xu, Fuxun Yu, Xiang Chen
21 May 2019 · AAML

Testing DNN Image Classifiers for Confusion & Bias Errors
Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray
20 May 2019

Predicting Model Failure using Saliency Maps in Autonomous Driving Systems
Sina Mohseni, Akshay V. Jagadeesh, Zhangyang Wang
19 May 2019
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
19 May 2019 · AAML

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 May 2019

POPQORN: Quantifying Robustness of Recurrent Neural Networks
Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
17 May 2019 · AAML

A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
Nicholas Carlini
17 May 2019 · ELM

An Efficient Pre-processing Method to Eliminate Adversarial Effects
Hua Wang, Jie Wang, Z. Yin
15 May 2019 · AAML

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Yifan Jiang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
15 May 2019

Moving Target Defense for Deep Visual Sensing against Adversarial Examples
Qun Song, Zhenyu Yan, Rui Tan
11 May 2019 · AAML

Interpreting and Evaluating Neural Network Robustness
Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, Xiang Chen
10 May 2019 · AAML

On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
10 May 2019 · AAML · FAtt

Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables
Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, Wei Liu
10 May 2019 · AAML
Exploring the Hyperparameter Landscape of Adversarial Robustness
Evelyn Duesterwald, Anupama Murthi, Ganesh Venkataraman, M. Sinn, Deepak Vijaykeerthy
09 May 2019 · AAML

Universal Adversarial Perturbations for Speech Recognition Systems
Paarth Neekhara, Shehzeen Samarah Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, F. Koushanfar
09 May 2019 · AAML

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
08 May 2019

A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy
07 May 2019 · AAML

Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study
Chihye Han, Wonjun Yoon, Gihyun Kwon, S. Nam, Dae-Shik Kim
07 May 2019 · AAML

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal
05 May 2019 · OODD

When Attackers Meet AI: Learning-empowered Attacks in Cooperative Spectrum Sensing
Z. Luo, Shangqing Zhao, Zhuo Lu, Jie Xu, Y. Sagduyu
04 May 2019 · AAML

Adversarial Training with Voronoi Constraints
Marc Khoury, Dylan Hadfield-Menell
02 May 2019 · AAML

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong
01 May 2019 · AAML

POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm
Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng
01 May 2019 · AAML