Adversarial examples in the physical world
arXiv:1607.02533 · v1, v2, v3, v4 (latest)
8 July 2016
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML
ArXiv (abs) · PDF · HTML

Papers citing "Adversarial examples in the physical world"

50 / 2,769 papers shown
Noise as a Resource for Learning in Knowledge Distillation
Elahe Arani, F. Sarfraz, Bahram Zonooz
57 · 6 · 0 · 11 Oct 2019

Universal Adversarial Perturbation for Text Classification
Hang Gao, Tim Oates
AAML
108 · 15 · 0 · 10 Oct 2019

Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing
Rizhao Cai, Changsheng Chen
AAML, CVBM
54 · 12 · 0 · 09 Oct 2019

Adversarial Learning of Deepfakes in Accounting
Marco Schreyer, Timur Sattarov, Bernd Reimer, Damian Borth
AAML
63 · 26 · 0 · 09 Oct 2019

SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, J. Dawson, Nasser M. Nasrabadi
AAML
144 · 19 · 0 · 08 Oct 2019

Real-time processing of high-resolution video and 3D model-based tracking for remote towers
O. Barrowclough, S. Briseid, G. Muntingh, Torbjørn Viksand
32 · 4 · 0 · 08 Oct 2019

Yet another but more efficient black-box adversarial attack: tiling and evolution strategies
Laurent Meunier, Cen Chen, Li Wang
MLAU, AAML
133 · 40 · 0 · 05 Oct 2019

Analyzing and Improving Neural Networks by Generating Semantic Counterexamples through Differentiable Rendering
Lakshya Jain, Varun Chandrasekaran, Uyeong Jang, Wilson Wu, Andrew Lee, Andy Yan, Steven Chen, S. Jha, Sanjit A. Seshia
AAML
72 · 11 · 0 · 02 Oct 2019

An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack
Yang Zhang, Shiyu Chang, Mo Yu, Kaizhi Qian
AAML
29 · 2 · 0 · 01 Oct 2019

Attacking CNN-based anti-spoofing face authentication in the physical domain
Bowen Zhang, B. Tondi, Mauro Barni
CVBM, AAML
51 · 5 · 0 · 01 Oct 2019

Cross-Layer Strategic Ensemble Defense Against Adversarial Examples
Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Emre Gursoy, Stacey Truex, Yanzhao Wu
AAML
52 · 12 · 0 · 01 Oct 2019

Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems
Utku Kose
SILM, AAML
29 · 2 · 0 · 29 Sep 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ
98 · 18 · 0 · 27 Sep 2019

Lower Bounds on Adversarial Robustness from Optimal Transport
A. Bhagoji, Daniel Cullina, Prateek Mittal
OOD, OT, AAML
72 · 94 · 0 · 26 Sep 2019

A Closer Look at Domain Shift for Deep Learning in Histopathology
Karin Stacke, Gabriel Eilertsen, Jonas Unger, Claes Lundström
OOD
66 · 62 · 0 · 25 Sep 2019

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Tianyu Pang, Kun Xu, Jun Zhu
AAML
91 · 105 · 0 · 25 Sep 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong
110 · 398 · 0 · 23 Sep 2019

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
Alesia Chernikova, Alina Oprea
AAML
121 · 40 · 0 · 23 Sep 2019

HAWKEYE: Adversarial Example Detector for Deep Neural Networks
Jinkyu Koo, Michael A. Roth, S. Bagchi
AAML
232 · 3 · 0 · 22 Sep 2019

Adversarial Learning with Margin-based Triplet Embedding Regularization
Yaoyao Zhong, Weihong Deng
AAML
91 · 50 · 0 · 20 Sep 2019

Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
Jihyeun Yoon, Kyungyul Kim, Jongseong Jang
AAML
48 · 4 · 0 · 19 Sep 2019

Training Robust Deep Neural Networks via Adversarial Noise Propagation
Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, Dacheng Tao
AAML
86 · 116 · 0 · 19 Sep 2019

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
AAML
79 · 680 · 0 · 17 Sep 2019

Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model
Prashanth Vijayaraghavan, D. Roy
AAML
49 · 36 · 0 · 17 Sep 2019

They Might NOT Be Giants: Crafting Black-Box Adversarial Examples with Fewer Queries Using Particle Swarm Optimization
Rayan Mosli, M. Wright, Bo Yuan, Yin Pan
AAML
48 · 16 · 0 · 16 Sep 2019

Towards Quality Assurance of Software Product Lines with Adversarial Configurations
Paul Temple, M. Acher, Gilles Perrouin, Battista Biggio, J. Jézéquel, Fabio Roli
AAML
41 · 11 · 0 · 16 Sep 2019

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity
Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li
AAML
134 · 19 · 0 · 16 Sep 2019

Adversarial Attack on Skeleton-based Human Action Recognition
Jian Liu, Naveed Akhtar, Ajmal Mian
AAML
67 · 68 · 0 · 14 Sep 2019

White-Box Adversarial Defense via Self-Supervised Data Estimation
Zudi Lin, Hanspeter Pfister, Ziming Zhang
AAML
23 · 2 · 0 · 13 Sep 2019

Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix
Yaxin Peng, Chaomin Shen, Guixu Zhang, Jinsong Fan
AAML
44 · 13 · 0 · 13 Sep 2019

Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders
Pratik Vaishnavi, Kevin Eykholt, A. Prakash, Amir Rahmati
AAML
46 · 2 · 0 · 12 Sep 2019

An Empirical Investigation of Randomized Defenses against Adversarial Attacks
Yannik Potdevin, Dirk Nowotka, Vijay Ganesh
AAML
49 · 4 · 0 · 12 Sep 2019

Inspecting adversarial examples using the Fisher information
Jörg Martin, Clemens Elster
AAML
50 · 15 · 0 · 12 Sep 2019

Sparse and Imperceivable Adversarial Attacks
Francesco Croce, Matthias Hein
AAML
110 · 199 · 0 · 11 Sep 2019

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks
Hang Yu, Aishan Liu, Xianglong Liu, Gen Li, Ping Luo, R. Cheng, Jichen Yang, Chongzhi Zhang
AAML
77 · 10 · 0 · 11 Sep 2019

Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification
Eitan Rothberg, Tingting Chen, Luo Jie, Hao Ji
AAML
23 · 0 · 0 · 10 Sep 2019

Effectiveness of Adversarial Examples and Defenses for Malware Classification
Robert Podschwadt, Hassan Takabi
AAML
52 · 11 · 0 · 10 Sep 2019

FDA: Feature Disruptive Attack
Aditya Ganeshan, B. S. Vivek, R. Venkatesh Babu
AAML
120 · 105 · 0 · 10 Sep 2019

Universal Physical Camouflage Attacks on Object Detectors
Lifeng Huang, Chengying Gao, Yuyin Zhou, Cihang Xie, Alan Yuille, C. Zou, Ning Liu
AAML
182 · 169 · 0 · 10 Sep 2019

Adversarial Robustness Against the Union of Multiple Perturbation Models
Pratyush Maini, Eric Wong, J. Zico Kolter
OOD, AAML
65 · 151 · 0 · 09 Sep 2019

STA: Adversarial Attacks on Siamese Trackers
Xugang Wu, Xiaoping Wang, Xu Zhou, Songlei Jian
GAN, AAML
41 · 6 · 0 · 08 Sep 2019

On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
Uyeong Jang, Susmit Jha, S. Jha
AAML
83 · 13 · 0 · 07 Sep 2019

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert D. Mullins, Ross J. Anderson
AAML
82 · 29 · 0 · 06 Sep 2019

Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?
Alfred Laugros, A. Caplier, Matthieu Ospici
AAML
56 · 40 · 0 · 04 Sep 2019

Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, Pushmeet Kohli
AAML
112 · 166 · 0 · 03 Sep 2019

Metric Learning for Adversarial Robustness
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
OOD
96 · 188 · 0 · 03 Sep 2019

Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness
Ling Liu, Wenqi Wei, Ka-Ho Chow, Margaret Loper, Emre Gursoy, Stacey Truex, Yanzhao Wu
UQCV, AAML, FedML
77 · 60 · 0 · 29 Aug 2019

A Statistical Defense Approach for Detecting Adversarial Examples
Alessandro Cennamo, Ido Freeman, A. Kummert
AAML
34 · 4 · 0 · 26 Aug 2019

advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns
Peng Kuang, Siyan Zheng, Mengkai Song, Qian Wang, Alireza Rahimpour, Hairong Qi
AAML, OOD
76 · 59 · 0 · 25 Aug 2019

Targeted Mismatch Adversarial Attack: Query with a Flower to Retrieve the Tower
Giorgos Tolias, Filip Radenovic, Ondřej Chum
AAML
77 · 71 · 0 · 24 Aug 2019