Towards Evaluating the Robustness of Neural Networks

16 August 2016
Nicholas Carlini
D. Wagner
    OOD AAML
ArXiv (abs) · PDF · HTML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 4,015 papers shown
HashTran-DNN: A Framework for Enhancing Robustness of Deep Neural Networks against Adversarial Malware Samples
Deqiang Li
Ramesh Baral
Tao Li
Han Wang
Qianmu Li
Shouhuai Xu
AAML
63
21
0
18 Sep 2018
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
Siyue Wang
Tianlin Li
Pu Zhao
Wujie Wen
David Kaeli
S. Chin
Xinyu Lin
AAML
76
70
0
13 Sep 2018
Query-Efficient Black-Box Attack by Active Learning
Pengcheng Li
Jinfeng Yi
Lijun Zhang
AAML MLAU
73
55
0
13 Sep 2018
Adversarial Examples: Opportunities and Challenges
Jiliang Zhang
Chen Li
AAML
68
235
0
13 Sep 2018
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
Yusuke Tsuzuku
Issei Sato
AAML
84
62
0
11 Sep 2018
Certified Adversarial Robustness with Additive Noise
Bai Li
Changyou Chen
Wenlin Wang
Lawrence Carin
AAML
121
350
0
10 Sep 2018
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure
Saeed Mahloujifar
Dimitrios I. Diochnos
Mohammad Mahmoody
85
152
0
09 Sep 2018
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
Kai Y. Xiao
Vincent Tjeng
Nur Muhammad (Mahi) Shafiullah
Aleksander Madry
AAML OOD
74
202
0
09 Sep 2018
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis
Marco Melis
Maura Pintor
Matthew Jagielski
Battista Biggio
Alina Oprea
Cristina Nita-Rotaru
Fabio Roli
SILM AAML
83
11
0
08 Sep 2018
Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples
Dan Peng
Zizhan Zheng
Xiaofeng Zhang
AAML
57
5
0
08 Sep 2018
Are adversarial examples inevitable?
Ali Shafahi
Wenjie Huang
Christoph Studer
Soheil Feizi
Tom Goldstein
SILM
88
283
0
06 Sep 2018
Bridging machine learning and cryptography in defence against adversarial attacks
O. Taran
Shideh Rezaeifar
Svyatoslav Voloshynovskiy
AAML
57
22
0
05 Sep 2018
DeepHunter: Hunting Deep Neural Network Defects via Coverage-Guided Fuzzing
Xiaofei Xie
Lei Ma
Felix Juefei Xu
Hongxu Chen
Minhui Xue
Yue Liu
Yang Liu
Jianjun Zhao
Jianxiong Yin
Simon See
116
41
0
04 Sep 2018
Adversarial Attacks on Node Embeddings via Graph Poisoning
Aleksandar Bojchevski
Stephan Günnemann
AAML
89
307
0
04 Sep 2018
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks
Siwakorn Srisakaokul
Yuhao Zhang
Zexuan Zhong
Wei Yang
Tao Xie
Bo Li
AAML
87
19
0
31 Aug 2018
Guiding Deep Learning System Testing using Surprise Adequacy
Jinhan Kim
R. Feldt
S. Yoo
AAML ELM
76
433
0
25 Aug 2018
Maximal Jacobian-based Saliency Map Attack
R. Wiyatno
Anqi Xu
AAML
44
88
0
23 Aug 2018
Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection
Mahdieh Abbasi
Arezoo Rajabi
A. Mozafari
R. Bobba
Christian Gagné
AAML
74
9
0
21 Aug 2018
Reinforcement Learning for Autonomous Defence in Software-Defined Networking
Yi Han
Benjamin I. P. Rubinstein
Tamas Abraham
T. Alpcan
O. Vel
S. Erfani
David Hubczenko
C. Leckie
Paul Montague
AAML
55
69
0
17 Aug 2018
Mitigation of Adversarial Attacks through Embedded Feature Selection
Ziyi Bao
Luis Muñoz-González
Emil C. Lupu
AAML
44
1
0
16 Aug 2018
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
Lea Schönherr
Katharina Kohls
Steffen Zeiler
Thorsten Holz
D. Kolossa
AAML
89
291
0
16 Aug 2018
Distributionally Adversarial Attack
T. Zheng
Changyou Chen
K. Ren
OOD
101
123
0
16 Aug 2018
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection
Xiao Chen
Chaoran Li
Derui Wang
S. Wen
Jun Zhang
Surya Nepal
Yang Xiang
K. Ren
AAML
83
248
0
10 Aug 2018
Defense Against Adversarial Attacks with Saak Transform
Sibo Song
Yueru Chen
Ngai-Man Cheung
C.-C. Jay Kuo
69
24
0
06 Aug 2018
Gray-box Adversarial Training
Vivek B.S.
Konda Reddy Mopuri
R. Venkatesh Babu
AAML
61
35
0
06 Aug 2018
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D. Su
Huan Zhang
Hongge Chen
Jinfeng Yi
Pin-Yu Chen
Yupeng Gao
VLM
140
393
0
05 Aug 2018
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Kaidi Xu
Sijia Liu
Pu Zhao
Pin-Yu Chen
Huan Zhang
Quanfu Fan
Deniz Erdogmus
Yanzhi Wang
Xinyu Lin
AAML
126
162
0
05 Aug 2018
ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples
Xinbo Liu
Jiliang Zhang
Yaping Lin
He Li
AAML
59
56
0
05 Aug 2018
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
L. Hanzlik
Yang Zhang
Kathrin Grosse
A. Salem
Maximilian Augustin
Michael Backes
Mario Fritz
OffRL
100
106
0
01 Aug 2018
Security and Privacy Issues in Deep Learning
Ho Bae
Jaehee Jang
Dahuin Jung
Hyemi Jang
Heonseok Ha
Hyungyu Lee
Sungroh Yoon
SILM MIA CV
147
79
0
31 Jul 2018
Symbolic Execution for Deep Neural Networks
D. Gopinath
Kaiyuan Wang
Mengshi Zhang
C. Păsăreanu
S. Khurshid
AAML
81
54
0
27 Jul 2018
Limitations of the Lipschitz constant as a defense against adversarial examples
Todd P. Huster
C. Chiang
R. Chadha
AAML
66
84
0
25 Jul 2018
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
Andrew Ilyas
Logan Engstrom
Aleksander Madry
MLAU AAML
110
375
0
20 Jul 2018
Physical Adversarial Examples for Object Detectors
Kevin Eykholt
Ivan Evtimov
Earlence Fernandes
Yue Liu
Amir Rahmati
Florian Tramèr
Atul Prakash
Tadayoshi Kohno
Basel Alomair
AAML
110
474
0
20 Jul 2018
Motivating the Rules of the Game for Adversarial Example Research
Justin Gilmer
Ryan P. Adams
Ian Goodfellow
David G. Andersen
George E. Dahl
AAML
107
229
0
18 Jul 2018
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions
Adnan Siraj Rakin
Jinfeng Yi
Boqing Gong
Deliang Fan
AAML MQ
80
50
0
18 Jul 2018
Training Recurrent Neural Networks against Noisy Computations during Inference
Minghai Qin
D. Vučinić
30
6
0
17 Jul 2018
Online Robust Policy Learning in the Presence of Unknown Adversaries
Aaron J. Havens
Zhanhong Jiang
Soumik Sarkar
AAML
120
44
0
16 Jul 2018
Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach
Minhao Cheng
Thong Le
Pin-Yu Chen
Jinfeng Yi
Huan Zhang
Cho-Jui Hsieh
AAML
112
348
0
12 Jul 2018
A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
Min Wu
Matthew Wicker
Wenjie Ruan
Xiaowei Huang
Marta Kwiatkowska
AAML
91
111
0
10 Jul 2018
Adaptive Adversarial Attack on Scene Text Recognition
Xiaoyong Yuan
Pan He
Xiaolin Li
Dapeng Oliver Wu
AAML
73
23
0
09 Jul 2018
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks
Thomas G. Dietterich
OOD
126
202
0
04 Jul 2018
Adversarial Robustness Toolbox v1.0.0
Maria-Irina Nicolae
M. Sinn
Minh-Ngoc Tran
Beat Buesser
Ambrish Rawat
...
Nathalie Baracaldo
Bryant Chen
Heiko Ludwig
Ian Molloy
Ben Edwards
AAML VLM
91
463
0
03 Jul 2018
Adversarial Examples in Deep Learning: Characterization and Divergence
Wenqi Wei
Ling Liu
Margaret Loper
Stacey Truex
Lei Yu
Mehmet Emre Gursoy
Yanzhao Wu
AAML SILM
119
18
0
29 Jun 2018
A New Angle on L2 Regularization
T. Tanay
Lewis D. Griffin
LLMSV
50
5
0
28 Jun 2018
Adversarial Reprogramming of Neural Networks
Gamaleldin F. Elsayed
Ian Goodfellow
Jascha Narain Sohl-Dickstein
OOD AAML
55
184
0
28 Jun 2018
Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning
J. Dhaliwal
S. Shintre
AAML
49
15
0
27 Jun 2018
Detection based Defense against Adversarial Examples from the Steganalysis Point of View
Jiayang Liu
Weiming Zhang
Yiwei Zhang
Dongdong Hou
Yujia Liu
Hongyue Zha
Nenghai Yu
AAML
101
100
0
21 Jun 2018
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
T. Tanay
Jerone T. A. Andrews
Lewis D. Griffin
73
7
0
19 Jun 2018
Non-Negative Networks Against Adversarial Attacks
William Fleshman
Edward Raff
Jared Sylvester
Steven Forsyth
Mark McLean
AAML
66
41
0
15 Jun 2018