ResearchTrend.AI

Robust Physical-World Attacks on Deep Learning Models
arXiv:1707.08945 · 27 July 2017
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
AAML

Papers citing "Robust Physical-World Attacks on Deep Learning Models"

50 / 123 papers shown
CEB Improves Model Robustness
Ian S. Fischer, Alexander A. Alemi
AAML · 13 Feb 2020

Adversarial Filters of Dataset Biases
Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi
10 Feb 2020

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht
22 Jan 2020

A Little Fog for a Large Turn
Harshitha Machiraju, V. Balasubramanian
AAML · 16 Jan 2020

Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient
Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie
AAML · 01 Jan 2020

Automated Testing for Deep Learning Systems with Differential Behavior Criteria
Yuan Gao, Yiqiang Han
31 Dec 2019

Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash
AAML · 27 Dec 2019

Scratch that! An Evolution-based Adversarial Attack against Neural Networks
Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela F. Cretu-Ciocarlie, Giacomo Boracchi, F. Koushanfar
AAML · 05 Dec 2019

Playing it Safe: Adversarial Robustness with an Abstain Option
Cassidy Laidlaw, S. Feizi
AAML · 25 Nov 2019

Identifying Model Weakness with Adversarial Examiner
Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille
AAML, ELM · 25 Nov 2019

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Zhengyu Zhao, Zhuoran Liu, Martha Larson
AAML · 06 Nov 2019

Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein
AAML · 25 Oct 2019

Streaming Networks: Enable A Robust Classification of Noise-Corrupted Images
Junbao Zhou, Fumihiko Takahashi
23 Oct 2019

Attacking Optical Flow
Anurag Ranjan, J. Janai, Andreas Geiger, Michael J. Black
AAML, 3DPC · 22 Oct 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI · 22 Oct 2019

Attacking Vision-based Perception in End-to-End Autonomous Driving Models
Adith Boloor, Karthik Garimella, Xin He, C. Gill, Yevgeniy Vorobeychik, Xuan Zhang
AAML · 02 Oct 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
XAI · 26 Sep 2019

Towards Quality Assurance of Software Product Lines with Adversarial Configurations
Paul Temple, M. Acher, Gilles Perrouin, Battista Biggio, J. Jézéquel, Fabio Roli
AAML · 16 Sep 2019

Universal Physical Camouflage Attacks on Object Detectors
Lifeng Huang, Chengying Gao, Yuyin Zhou, Cihang Xie, Alan Yuille, C. Zou, Ning Liu
AAML · 10 Sep 2019

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method
Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, B. Kailkhura, Xue Lin
AAML · 26 Jul 2019

Structure-Invariant Testing for Machine Translation
Pinjia He, Clara Meister, Z. Su
19 Jul 2019

Quantitative Verification of Neural Networks and its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena
AAML · 25 Jun 2019

Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh
AAML · 10 Jun 2019

Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
Pengcheng Li, Jinfeng Yi, Bowen Zhou, Lijun Zhang
AAML · 28 May 2019

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking
Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei
AAML, VOT · 27 May 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
08 May 2019

POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm
Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng
AAML · 01 May 2019

Fooling automated surveillance cameras: adversarial patches to attack person detection
Simen Thys, W. V. Ranst, Toon Goedemé
AAML · 18 Apr 2019

Adversarial camera stickers: A physical camera-based attack on deep learning systems
Juncheng Billy Li, Frank R. Schmidt, J. Zico Kolter
AAML · 21 Mar 2019

GRIP: Generative Robust Inference and Perception for Semantic Robot Manipulation in Adversarial Environments
Xiaotong Chen, Rui Chen, Zhiqiang Sui, Zhefan Ye, Yanqi Liu, R. I. Bahar, Odest Chadwicke Jenkins
20 Mar 2019

Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models
Adith Boloor, Xin He, C. Gill, Yevgeniy Vorobeychik, Xuan Zhang
AAML · 12 Mar 2019

Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
S. Kokalj-Filipovic, Rob Miller, Nicholas Chang, Chi Leung Lau
AAML · 16 Feb 2019

Weighted-Sampling Audio Adversarial Example Attack
Xiaolei Liu, Xiaosong Zhang, Kun Wan, Qingxin Zhu, Yufei Ding
DiffM, AAML · 26 Jan 2019

Defending Against Universal Perturbations With Shared Adversarial Training
Chaithanya Kumar Mummadi, Thomas Brox, J. H. Metzen
AAML · 10 Dec 2018

Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
Jacob Dumford, Walter J. Scheirer
AAML · 07 Dec 2018

Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei, Lingfei Wu, Pin-Yu Chen, A. Dimakis, Inderjit S. Dhillon, Michael Witbrock
AAML · 01 Dec 2018

A Spectral View of Adversarially Robust Features
Shivam Garg, Vatsal Sharan, B. Zhang, Gregory Valiant
AAML · 15 Nov 2018

Exploring Connections Between Active Learning and Model Extraction
Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Shane Walker, Songbai Yan
MIACV · 05 Nov 2018

MeshAdv: Adversarial Meshes for Visual Recognition
Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, M. Liu
AAML · 11 Oct 2018

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Chaowei Xiao, Ruizhi Deng, Bo Li, Feng Yu, M. Liu, D. Song
AAML · 11 Oct 2018

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh
AAML, OOD · 01 Oct 2018

Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards
Mhafuzul Islam, M. Chowdhury, Hongda Li, Hongxin Hu
AAML · 27 Sep 2018

Generating 3D Adversarial Point Clouds
Chong Xiang, C. Qi, Bo Li
3DPC · 19 Sep 2018

Robust Adversarial Perturbation on Deep Proposal-based Models
Yuezun Li, Dan Tian, Ming-Ching Chang, Xiao Bian, Siwei Lyu
AAML · 16 Sep 2018

Query-Efficient Black-Box Attack by Active Learning
Pengcheng Li, Jinfeng Yi, Lijun Zhang
AAML, MLAU · 13 Sep 2018

Are You Tampering With My Data?
Michele Alberti, Vinaychandran Pondenkandath, Marcel Würsch, Manuel Bouillon, Mathias Seuret, Rolf Ingold, Marcus Liwicki
AAML · 21 Aug 2018

Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, Alec Jacobson
AAML · 08 Aug 2018

Harmonic Adversarial Attack Method
Wen Heng, Shuchang Zhou, Tingting Jiang
AAML · 18 Jul 2018

Motivating the Rules of the Game for Adversarial Example Research
Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David G. Andersen, George E. Dahl
AAML · 18 Jul 2018

Experimental Resilience Assessment of An Open-Source Driving Agent
A. Rubaiyat, Yongming Qin, H. Alemzadeh
17 Jul 2018