ResearchTrend.AI
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
16 August 2016 · arXiv:1608.04644 · Tags: OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 4,016 papers shown
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems
Kazuya Kakizaki, Kosuke Yoshida · Tags: AAML, CVBM · 79 · 19 · 0 · 09 May 2019

Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study
Chihye Han, Wonjun Yoon, Gihyun Kwon, S. Nam, Dae-Shik Kim · Tags: AAML · 53 · 5 · 0 · 07 May 2019

Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry · Tags: SILM · 143 · 1,847 · 0 · 06 May 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal · Tags: OODD · 79 · 26 · 0 · 05 May 2019

Transfer of Adversarial Robustness Between Perturbation Types
Daniel Kang, Yi Sun, Tom B. Brown, Dan Hendrycks, Jacob Steinhardt · Tags: AAML · 73 · 49 · 0 · 03 May 2019

You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong · Tags: AAML · 132 · 362 · 0 · 02 May 2019

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong · Tags: AAML · 101 · 245 · 0 · 01 May 2019

POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm
Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng · Tags: AAML · 124 · 68 · 0 · 01 May 2019

Dropping Pixels for Adversarial Robustness
Hossein Hosseini, Sreeram Kannan, Radha Poovendran · 54 · 16 · 0 · 01 May 2019

Test Selection for Deep Learning Systems
Wei Ma, Mike Papadakis, Anestis Tsakmalis, Maxime Cordy, Yves Le Traon · Tags: OOD · 73 · 93 · 0 · 30 Apr 2019

Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr, Dan Boneh · Tags: AAML, SILM · 122 · 380 · 0 · 30 Apr 2019

Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · Tags: AAML · 141 · 1,255 · 0 · 29 Apr 2019

Data Poisoning Attack against Knowledge Graph Embedding
Hengtong Zhang, T. Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, K. Ren · Tags: KELM · 71 · 84 · 0 · 26 Apr 2019

Robustness Verification of Support Vector Machines
Francesco Ranzato, Marco Zanella · Tags: AAML · 84 · 18 · 0 · 26 Apr 2019

Physical Adversarial Textures that Fool Visual Object Tracking
R. Wiyatno, Anqi Xu · Tags: AAML · 95 · 74 · 0 · 24 Apr 2019

A Robust Approach for Securing Audio Classification Against Adversarial Attacks
Mohammad Esmaeilpour, P. Cardinal, Alessandro Lameiras Koerich · Tags: AAML · 74 · 71 · 0 · 24 Apr 2019

Fooling automated surveillance cameras: adversarial patches to attack person detection
Simen Thys, W. V. Ranst, Toon Goedemé · Tags: AAML · 115 · 573 · 0 · 18 Apr 2019

Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao · 89 · 75 · 0 · 18 Apr 2019

ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks
Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah · Tags: AAML · 50 · 18 · 0 · 17 Apr 2019

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers
Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde · Tags: AAML · 99 · 101 · 0 · 17 Apr 2019

Adversarial Defense Through Network Profiling Based Path Extraction
Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chong Li, Minyi Guo, Yuhao Zhu · Tags: AAML · 69 · 51 · 0 · 17 Apr 2019

Reducing Adversarial Example Transferability Using Gradient Regularization
George Adam, P. Smirnov, B. Haibe-Kains, Anna Goldenberg · Tags: AAML · 81 · 4 · 0 · 16 Apr 2019

Detecting the Unexpected via Image Resynthesis
Krzysztof Lis, Krishna Kanth Nakka, Pascal Fua, Mathieu Salzmann · Tags: UQCV · 87 · 178 · 0 · 16 Apr 2019

Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction
Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, Baekgyu Kim · Tags: AAML · 51 · 72 · 0 · 15 Apr 2019

Unrestricted Adversarial Examples via Semantic Manipulation
Anand Bhattad, Min Jin Chong, Kaizhao Liang, Yangqiu Song, David A. Forsyth · Tags: AAML · 87 · 153 · 0 · 12 Apr 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · Tags: AAML · 74 · 35 · 0 · 12 Apr 2019

Black-box Adversarial Attacks on Video Recognition Models
Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang · Tags: AAML, MLAU · 86 · 150 · 0 · 10 Apr 2019

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wen Liu, Tong Zhang, Jun Zhu · Tags: CVBM, AAML · 83 · 409 · 0 · 09 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu · Tags: SILM, AAML · 134 · 46 · 0 · 08 Apr 2019

Adversarial Audio: A New Information Hiding Method and Backdoor for DNN-based Speech Recognition Models
Yehao Kong, Jiliang Zhang · 52 · 28 · 0 · 08 Apr 2019

Malware Evasion Attack and Defense
Yonghong Huang, Utkarsh Verma, Celeste Fralick, G. Infante-Lopez, B. Kumar, Carl Woodward · Tags: AAML · 65 · 16 · 0 · 07 Apr 2019

On Training Robust PDF Malware Classifiers
Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana · Tags: AAML · 99 · 69 · 0 · 06 Apr 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu · Tags: SILM, AAML · 99 · 860 · 0 · 05 Apr 2019

Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks
Fatemeh Sheikholeslami, Swayambhoo Jain, G. Giannakis · Tags: AAML · 67 · 25 · 0 · 05 Apr 2019

HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
Jianbo Chen, Michael I. Jordan, Martin J. Wainwright · Tags: AAML · 123 · 672 · 0 · 03 Apr 2019

Interpreting Adversarial Examples by Activation Promotion and Suppression
Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, Xinyu Lin · Tags: AAML, FAtt · 147 · 43 · 0 · 03 Apr 2019

Adversarial Attacks against Deep Saliency Models
Zhaohui Che, Ali Borji, Guangtao Zhai, Suiyi Ling, G. Guo, P. Le Callet · Tags: AAML · 49 · 4 · 0 · 02 Apr 2019

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang · Tags: FedML, AAML, MIACV · 96 · 258 · 0 · 01 Apr 2019

Robustness of 3D Deep Learning in an Adversarial Setting
Matthew Wicker, Marta Kwiatkowska · Tags: 3DPC · 82 · 99 · 0 · 01 Apr 2019

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao · Tags: AAML · 64 · 152 · 0 · 01 Apr 2019

Defending against adversarial attacks by randomized diversification
O. Taran, Shideh Rezaeifar, T. Holotyak, Svyatoslav Voloshynovskiy · Tags: AAML · 82 · 39 · 0 · 01 Apr 2019

On the Vulnerability of CNN Classifiers in EEG-Based BCIs
Xiao Zhang, Dongrui Wu · Tags: AAML · 73 · 83 · 0 · 31 Mar 2019

BlackMarks: Blackbox Multibit Watermarking for Deep Neural Networks
Huili Chen, B. Rouhani, F. Koushanfar · 68 · 52 · 0 · 31 Mar 2019

Adversarial Robustness vs Model Compression, or Both?
Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin · Tags: AAML · 114 · 165 · 0 · 29 Mar 2019

Rallying Adversarial Techniques against Deep Learning for Network Security
Joseph Clements, Yuzhe Yang, Ankur A Sharma, Hongxin Hu, Yingjie Lao · Tags: AAML · 80 · 52 · 0 · 27 Mar 2019

Bridging Adversarial Robustness and Gradient Interpretability
Beomsu Kim, Junghoon Seo, Taegyun Jeon · Tags: AAML · 84 · 40 · 0 · 27 Mar 2019

Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems
Steffen Eger, Gözde Gül Sahin, Andreas Rucklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych · Tags: AAML · 122 · 163 · 0 · 27 Mar 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein · Tags: AAML · 68 · 31 · 0 · 27 Mar 2019

On the Adversarial Robustness of Multivariate Robust Estimation
Erhan Bayraktar, Lifeng Lai · 25 · 3 · 0 · 27 Mar 2019

A geometry-inspired decision-based attack
Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · Tags: AAML · 77 · 54 · 0 · 26 Mar 2019
26 Mar 2019