Adversarial examples in the physical world

8 July 2016
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM, AAML
arXiv: 1607.02533

Papers citing "Adversarial examples in the physical world"

50 / 2,769 papers shown
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs
Rana Abou-Khamis
Ashraf Matrawy
AAML
83
47
0
08 Jul 2020
On the relationship between class selectivity, dimensionality, and robustness
Matthew L. Leavitt
Ari S. Morcos
58
6
0
08 Jul 2020
Quaternion Capsule Networks
B. Özcan
Furkan Kinli
Mustafa Furkan Kıraç
3DPC
36
7
0
08 Jul 2020
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Giulio Lovisotto
H.C.M. Turner
Ivo Sluganovic
Martin Strohmeier
Ivan Martinovic
AAML
84
104
0
08 Jul 2020
RobFR: Benchmarking Adversarial Robustness on Face Recognition
Xiao Yang
Dingcheng Yang
Yinpeng Dong
Hang Su
Wenjian Yu
Jun Zhu
AAML
130
14
0
08 Jul 2020
How benign is benign overfitting?
Amartya Sanyal
P. Dokania
Varun Kanade
Philip Torr
NoLa, AAML
89
58
0
08 Jul 2020
Making Adversarial Examples More Transferable and Indistinguishable
Junhua Zou
Yexin Duan
Xin Liu
Junyang Qiu
Yu Pan
Zhisong Pan
AAML
75
32
0
08 Jul 2020
Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability
Utku Ozbulak
Jonathan Peck
W. D. Neve
Bart Goossens
Yvan Saeys
Arnout Van Messem
AAML
34
2
0
07 Jul 2020
On Data Augmentation and Adversarial Risk: An Empirical Analysis
Hamid Eghbalzadeh
Khaled Koutini
Paul Primus
Verena Haunschmid
Michal Lewandowski
Werner Zellinger
Bernhard A. Moser
Gerhard Widmer
AAML
42
9
0
06 Jul 2020
Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
Ishai Rosenberg
A. Shabtai
Yuval Elovici
Lior Rokach
AAML
89
12
0
05 Jul 2020
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers
Yuting Liang
Reza Samavi
AAML
37
2
0
03 Jul 2020
Generating Adversarial Examples with Controllable Non-transferability
Renzhi Wang
Tianwei Zhang
Xiaofei Xie
Lei Ma
Cong Tian
Felix Juefei-Xu
Yang Liu
SILM, AAML
80
3
0
02 Jul 2020
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
Xabier Echeberria-Barrio
Amaia Gil-Lerchundi
Ines Goicoechea-Telleria
Raul Orduna Urrutia
AAML
74
5
0
02 Jul 2020
Query-Free Adversarial Transfer via Undertrained Surrogates
Chris Miller
Soroush Vosoughi
AAML
37
0
0
01 Jul 2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Miguel Villarreal-Vasquez
B. Bhargava
AAML
98
39
0
01 Jul 2020
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks
Kishor Datta Gupta
Zahid Akhtar
D. Dasgupta
AAML
70
10
0
01 Jul 2020
Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications
Eric Wong
Tim Schneider
Joerg Schmitt
Frank R. Schmidt
J. Zico Kolter
AAML
72
8
0
30 Jun 2020
Generating Adversarial Examples with an Optimized Quality
Aminollah Khormali
Daehun Nyang
David A. Mohaisen
AAML
50
1
0
30 Jun 2020
Biologically Inspired Mechanisms for Adversarial Robustness
M. V. Reddy
Andrzej Banburski
Nishka Pant
T. Poggio
AAML
70
46
0
29 Jun 2020
Geometry-Inspired Top-k Adversarial Perturbations
Nurislam Tursynbek
Aleksandr Petiushko
Ivan Oseledets
AAML
83
10
0
28 Jun 2020
Learning Goals from Failure
Dave Epstein
Carl Vondrick
27
3
0
28 Jun 2020
FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications
Yunfei Song
Tian Liu
Tongquan Wei
Xiangfeng Wang
Zhe Tao
Mingsong Chen
108
50
0
28 Jun 2020
Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
Kaidi Jin
Tianwei Zhang
Chao Shen
Yufei Chen
Ming Fan
Chenhao Lin
Ting Liu
AAML
43
14
0
26 Jun 2020
Orthogonal Deep Models As Defense Against Black-Box Attacks
M. Jalwana
Naveed Akhtar
Bennamoun
Ajmal Mian
AAML
47
11
0
26 Jun 2020
Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks
Huiying Li
Shawn Shan
Emily Wenger
Jiayun Zhang
Haitao Zheng
Ben Y. Zhao
AAML
85
45
0
24 Jun 2020
Bit Error Robustness for Energy-Efficient DNN Accelerators
David Stutz
Nandhini Chandramoorthy
Matthias Hein
Bernt Schiele
MQ
54
1
0
24 Jun 2020
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
Francesco Croce
Maksym Andriushchenko
Naman D. Singh
Nicolas Flammarion
Matthias Hein
105
101
0
23 Jun 2020
RayS: A Ray Searching Method for Hard-label Adversarial Attack
Jinghui Chen
Quanquan Gu
AAML
85
139
0
23 Jun 2020
Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu
Yueqiang Cheng
Husheng Zhou
Yantao Lu
MIACV, AAML
111
103
0
23 Jun 2020
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
Cassidy Laidlaw
Sahil Singla
Soheil Feizi
AAML, OOD
121
189
0
22 Jun 2020
Slimming Neural Networks using Adaptive Connectivity Scores
Madan Ravi Ganesh
Dawsin Blanchard
Jason J. Corso
Salimeh Yasaei Sekeh
61
11
0
22 Jun 2020
Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan
Jinwoo Shin
Sung Ju Hwang
NoLa, AAML
145
25
0
22 Jun 2020
Interpretation of 3D CNNs for Brain MRI Data Classification
M. Kan
Ruslan Aliev
A. Rudenko
Nikita Drobyshev
Nikita Petrashen
E. Kondrateva
M. Sharaev
A. Bernstein
Evgeny Burnaev
DiffM
46
0
0
20 Jun 2020
How do SGD hyperparameters in natural training affect adversarial robustness?
Sandesh Kamath
Amit Deshpande
K. Subrahmanyam
AAML
41
3
0
20 Jun 2020
Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples
J. O. Caro
Yilong Ju
Ryan Pyle
Sourav Dey
Wieland Brendel
Fabio Anselmi
Ankit B. Patel
AAML
79
11
0
19 Jun 2020
Adversarial Attacks for Multi-view Deep Models
Xuli Sun
Shiliang Sun
AAML
31
0
0
19 Jun 2020
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples
Kaleel Mahmood
Deniz Gurevin
Marten van Dijk
Phuong Ha Nguyen
AAML
90
24
0
18 Jun 2020
PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier
Haitham Khedr
James Ferlez
Yasser Shoukry
AAML
62
5
0
18 Jun 2020
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning
Antonios Alexos
Konstantinos P. Panousis
S. Chatzis
OOD, AAML
35
3
0
18 Jun 2020
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training
Eran Segalis
Eran Galili
69
17
0
17 Jun 2020
Adversarial Examples Detection and Analysis with Layer-wise Autoencoders
Bartosz Wójcik
P. Morawiecki
Marek Śmieja
Tomasz Krzyżek
Przemysław Spurek
Jacek Tabor
GAN
67
13
0
17 Jun 2020
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das
P. Rad
XAI
188
608
0
16 Jun 2020
An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang
Mengnan Du
Ninghao Liu
Fan Yang
Helen Zhou
AAML
73
190
0
15 Jun 2020
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks
Sarada Krithivasan
Sanchari Sen
A. Raghunathan
AAML
37
1
0
14 Jun 2020
Defensive Approximation: Securing CNNs using Approximate Computing
Amira Guesmi
Ihsen Alouani
Khaled N. Khasawneh
M. Baklouti
T. Frikha
Mohamed Abid
Nael B. Abu-Ghazaleh
AAML
88
38
0
13 Jun 2020
Adversarial Self-Supervised Contrastive Learning
Minseon Kim
Jihoon Tack
Sung Ju Hwang
SSL
99
251
0
13 Jun 2020
Towards Robust Pattern Recognition: A Review
Xu-Yao Zhang
Cheng-Lin Liu
C. Suen
OOD, HAI
69
110
0
12 Jun 2020
Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks
Nataniel Ruiz
Sarah Adel Bargal
Stan Sclaroff
AAML
63
11
0
11 Jun 2020
Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features
Krishna Kanth Nakka
Mathieu Salzmann
AAML
41
6
0
10 Jun 2020
Meta Transition Adaptation for Robust Deep Learning with Noisy Labels
Jun Shu
Qian Zhao
Zongben Xu
Deyu Meng
NoLa
98
31
0
10 Jun 2020