Adversarial Patch (arXiv 1712.09665)

27 December 2017
Tom B. Brown
Dandelion Mané
Aurko Roy
Martín Abadi
Justin Gilmer
    AAML

Papers citing "Adversarial Patch"

50 / 241 papers shown
Visually Imperceptible Adversarial Patch Attacks on Digital Images
Yaguan Qian
Jiamin Wang
Bin Wang
Xiang Ling
Zhaoquan Gu
Chunming Wu
Wassim Swaileh
AAML
39
2
0
02 Dec 2020
A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks
Hao Shen
Sihong Chen
Ran Wang
30
5
0
27 Nov 2020
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect
Athena Sayles
Ashish Hooda
M. Gupta
Rahul Chatterjee
Earlence Fernandes
AAML
19
76
0
26 Nov 2020
Adversarial Attack on Facial Recognition using Visible Light
Morgan Frearson
Kien Nguyen
AAML
21
7
0
25 Nov 2020
Adversarial Attacks on Optimization based Planners
Sai H. Vemprala
Ashish Kapoor
AAML
29
12
0
30 Oct 2020
Dynamic Adversarial Patch for Evading Object Detection Models
Shahar Hoory
T. Shapira
A. Shabtai
Yuval Elovici
AAML
18
40
0
25 Oct 2020
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce
Maksym Andriushchenko
Vikash Sehwag
Edoardo Debenedetti
Nicolas Flammarion
M. Chiang
Prateek Mittal
Matthias Hein
VLM
234
680
0
19 Oct 2020
Double Targeted Universal Adversarial Perturbations
Philipp Benz
Chaoning Zhang
Tooba Imtiaz
In So Kweon
AAML
40
48
0
07 Oct 2020
Generating Adversarial yet Inconspicuous Patches with a Single Image
Jinqi Luo
Tao Bai
Jun Zhao
AAML
27
6
0
21 Sep 2020
MultAV: Multiplicative Adversarial Videos
Shao-Yuan Lo
Vishal M. Patel
AAML
26
8
0
17 Sep 2020
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN
41
62
0
11 Sep 2020
Defending Against Multiple and Unforeseen Adversarial Videos
Shao-Yuan Lo
Vishal M. Patel
AAML
31
23
0
11 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
33
156
0
08 Sep 2020
Adversarial Patch Camouflage against Aerial Detection
Ajaya Adhikari
R. D. Hollander
I. Tolios
M. V. Bekkum
Anneloes M. Bal
...
Dennis Gross
N. Jansen
Guillermo A. Pérez
Kit Buurman
S. Raaijmakers
AAML
29
43
0
31 Aug 2020
AP-Loss for Accurate One-Stage Object Detection
Kean Chen
Weiyao Lin
Jianguo Li
John See
Ji Wang
Junni Zou
ObjD
22
66
0
17 Aug 2020
A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles
M. Pham
Kaiqi Xiong
25
138
0
16 Jul 2020
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Giulio Lovisotto
H.C.M. Turner
Ivo Sluganovic
Martin Strohmeier
Ivan Martinovic
AAML
19
101
0
08 Jul 2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Miguel Villarreal-Vasquez
B. Bhargava
AAML
17
38
0
01 Jul 2020
Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy
Sijia Liu
Gaoyuan Zhang
Cynthia Liu
Pin-Yu Chen
Shiyu Chang
Luca Daniel
AAML
FAtt
27
66
0
26 Jun 2020
Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger
Josephine Passananti
A. Bhagoji
Yuanshun Yao
Haitao Zheng
Ben Y. Zhao
AAML
31
200
0
25 Jun 2020
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang
A. Bhagoji
Vikash Sehwag
Prateek Mittal
AAML
30
29
0
17 May 2020
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan
Vitaly Shmatikov
AAML
FedML
SILM
46
298
0
08 May 2020
Adversarial Training against Location-Optimized Adversarial Patches
Sukrut Rao
David Stutz
Bernt Schiele
AAML
19
91
0
05 May 2020
Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu
Mengnan Du
Ruocheng Guo
Huan Liu
Xia Hu
AAML
26
8
0
23 Apr 2020
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Michael Everett
Bjorn Lutjens
Jonathan P. How
AAML
13
41
0
11 Apr 2020
ObjectNet Dataset: Reanalysis and Correction
Ali Borji
3DPC
13
11
0
04 Apr 2020
Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles
Songan Zhang
H. Peng
S. Nageshrao
E. Tseng
AAML
27
5
0
18 Mar 2020
Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan
Xingjun Ma
Yisen Wang
James Bailey
A. K. Qin
Yun Yang
AAML
167
224
0
08 Mar 2020
Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence
S. Raschka
Joshua Patterson
Corey J. Nolet
AI4CE
24
484
0
12 Feb 2020
Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks
Lu Chen
Wenyuan Xu
AAML
21
21
0
08 Feb 2020
Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Oliver Willers
Sebastian Sudholt
Shervin Raafatnia
Stephanie Abrecht
28
80
0
22 Jan 2020
A Little Fog for a Large Turn
Harshitha Machiraju
V. Balasubramanian
AAML
15
9
0
16 Jan 2020
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient
Ling Liang
Xing Hu
Lei Deng
Yujie Wu
Guoqi Li
Yufei Ding
Peng Li
Yuan Xie
AAML
22
60
0
01 Jan 2020
Design and Interpretation of Universal Adversarial Patches in Face Detection
Xiao Yang
Fangyun Wei
Hongyang R. Zhang
Jun Zhu
AAML
CVBM
52
43
0
30 Nov 2019
Fine-grained Synthesis of Unrestricted Adversarial Examples
Omid Poursaeed
Tianxing Jiang
Yordanos Goshu
Harry Yang
Serge J. Belongie
Ser-Nam Lim
AAML
37
13
0
20 Nov 2019
Generate (non-software) Bugs to Fool Classifiers
Hiromu Yakura
Youhei Akimoto
Jun Sakuma
AAML
25
10
0
20 Nov 2019
Simple iterative method for generating targeted universal adversarial perturbations
Hokuto Hirano
Kazuhiro Takemoto
AAML
27
30
0
15 Nov 2019
Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno
Anqi Xu
Ousmane Amadou Dia
A. D. Berker
AAML
18
104
0
13 Nov 2019
Imperceptible Adversarial Attacks on Tabular Data
Vincent Ballet
X. Renard
Jonathan Aigrain
Thibault Laugel
P. Frossard
Marcin Detyniecki
12
72
0
08 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
37
68
0
06 Nov 2019
Feature relevance quantification in explainable AI: A causal problem
Dominik Janzing
Lenon Minorics
Patrick Blobaum
FAtt
CML
13
278
0
29 Oct 2019
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi
A. Ding
AAML
14
3
0
27 Oct 2019
Attacking Optical Flow
Anurag Ranjan
J. Janai
Andreas Geiger
Michael J. Black
AAML
3DPC
16
87
0
22 Oct 2019
Learning Model-Agnostic Counterfactual Explanations for Tabular Data
Martin Pawelczyk
Johannes Haug
Klaus Broelemann
Gjergji Kasneci
OOD
CML
33
199
0
21 Oct 2019
Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
Z. Q. Lin
M. Shafiee
S. Bochkarev
Michael St. Jules
Xiao Yu Wang
A. Wong
FAtt
29
80
0
16 Oct 2019
Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao
Yukun Yang
H. Li
AAML
21
183
0
10 Oct 2019
Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha
Akshayvarun Subramanya
Koninika Patil
Hamed Pirsiavash
ObjD
AAML
32
53
0
30 Sep 2019
Defending Against Physically Realizable Attacks on Image Classification
Tong Wu
Liang Tong
Yevgeniy Vorobeychik
AAML
22
125
0
20 Sep 2019
Towards Quality Assurance of Software Product Lines with Adversarial Configurations
Paul Temple
M. Acher
Gilles Perrouin
Battista Biggio
J. Jézéquel
Fabio Roli
AAML
16
11
0
16 Sep 2019
Universal Physical Camouflage Attacks on Object Detectors
Lifeng Huang
Chengying Gao
Yuyin Zhou
Cihang Xie
Alan Yuille
C. Zou
Ning Liu
AAML
143
162
0
10 Sep 2019