Towards Evaluating the Robustness of Neural Networks
16 August 2016 · arXiv 1608.04644
Nicholas Carlini, D. Wagner
OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 of 4,015 citing papers shown.

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
Pu Zhao, Sijia Liu, Yanzhi Wang, Xinyu Lin
AAML · 72 · 37 · 0 · 09 Apr 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu
AAML · 104 · 768 · 0 · 01 Apr 2018

Adversarial Attacks and Defences Competition
Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, ..., Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe
AAML, SILM · 100 · 323 · 0 · 31 Mar 2018

Learning to Anonymize Faces for Privacy Preserving Action Detection
Zhongzheng Ren, Yong Jae Lee, Michael S. Ryoo
CVBM, PICV · 153 · 205 · 0 · 30 Mar 2018

The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples
Ayse Elvan Aydemir, A. Temizel, T. Taşkaya-Temizel
AAML · 63 · 32 · 0 · 28 Mar 2018

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
AAML · 66 · 26 · 0 · 26 Mar 2018

Clipping free attacks against artificial neural networks
B. Addad, Jérôme Kodjabachian, Christophe Meyer
AAML · 33 · 1 · 0 · 26 Mar 2018

An Overview of Vulnerabilities of Voice Controlled Systems
Yuan Gong, C. Poellabauer
51 · 32 · 0 · 24 Mar 2018

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
Daniel Jakubovitz, Raja Giryes
AAML · 99 · 210 · 0 · 23 Mar 2018

Understanding Measures of Uncertainty for Adversarial Example Detection
Lewis Smith, Y. Gal
UQCV · 96 · 365 · 0 · 22 Mar 2018

Adversarial Defense based on Structure-to-Signal Autoencoders
Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel
AAML · 71 · 32 · 0 · 21 Mar 2018

DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
Lei Ma, Felix Juefei Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, ..., Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang
ELM · 82 · 626 · 0 · 20 Mar 2018

Improving Transferability of Adversarial Examples with Input Diversity
Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan Yuille
AAML · 136 · 1,133 · 0 · 19 Mar 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 88 · 287 · 0 · 19 Mar 2018

A Dual Approach to Scalable Verification of Deep Networks
Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy A. Mann, Pushmeet Kohli
70 · 399 · 0 · 17 Mar 2018

Semantic Adversarial Examples
Hossein Hosseini, Radha Poovendran
GAN, AAML · 108 · 199 · 0 · 16 Mar 2018

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training
Derui Wang, Chaoran Li, S. Wen, Surya Nepal, Yang Xiang
AAML · 74 · 30 · 0 · 14 Mar 2018

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
Zihao Liu, Qi Liu, Tao Liu, Nuo Xu, Xue Lin, Yanzhi Wang, Wujie Wen
AAML, MQ · 92 · 265 · 0 · 14 Mar 2018

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick McDaniel
OOD, AAML · 156 · 508 · 0 · 13 Mar 2018

Invisible Mask: Practical Attacks on Face Recognition with Infrared
Zhe Zhou, Di Tang, Wenyuan Xu, Weili Han, Xiangyu Liu, Kehuan Zhang
CVBM, AAML · 68 · 103 · 0 · 13 Mar 2018

Testing Deep Neural Networks
Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore
AAML · 88 · 219 · 0 · 10 Mar 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
408 · 3,496 · 0 · 09 Mar 2018

Sparse Adversarial Perturbations for Videos
Xingxing Wei, Jun Zhu, Hang Su
AAML · 81 · 145 · 0 · 07 Mar 2018

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
SILM, AAML · 118 · 245 · 0 · 03 Mar 2018

Protecting JPEG Images Against Adversarial Attacks
Aaditya (Adi) Prakash, N. Moran, Solomon Garber, Antonella DiLillo, J. Storer
AAML · 82 · 34 · 0 · 02 Mar 2018

Adversarial Active Learning for Deep Networks: a Margin Based Approach
Mélanie Ducoffe, F. Precioso
GAN, AAML · 153 · 278 · 0 · 27 Feb 2018

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
AAML, SILM · 80 · 99 · 0 · 27 Feb 2018

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
Mahmood Sharif, Lujo Bauer, Michael K. Reiter
AAML · 155 · 138 · 0 · 27 Feb 2018

Max-Mahalanobis Linear Discriminant Analysis Networks
Tianyu Pang, Chao Du, Jun Zhu
83 · 55 · 0 · 26 Feb 2018

Verifying Controllers Against Adversarial Examples with Bayesian Optimization
Shromona Ghosh, Felix Berkenkamp, G. Ranade, S. Qadeer, Ashish Kapoor
AAML · 103 · 45 · 0 · 23 Feb 2018

Deep Defense: Training DNNs with Improved Adversarial Robustness
Ziang Yan, Yiwen Guo, Changshui Zhang
AAML · 97 · 110 · 0 · 23 Feb 2018

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries
Z. Yao, A. Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney
106 · 167 · 0 · 22 Feb 2018

Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, Mayank Vatsa
AAML · 97 · 166 · 0 · 22 Feb 2018

L2-Nonexpansive Neural Networks
Haifeng Qian, M. Wegman
75 · 74 · 0 · 22 Feb 2018

Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch
João Monteiro, Isabela Albuquerque, Zahid Akhtar, T. Falk
AAML · 90 · 29 · 0 · 21 Feb 2018

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
FAtt · 174 · 592 · 0 · 21 Feb 2018

On Lyapunov exponents and adversarial perturbation
Vinay Uday Prabhu, Nishant Desai, John Whaley
AAML · 22 · 4 · 0 · 20 Feb 2018

Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau
FedML, AAML · 98 · 228 · 0 · 19 Feb 2018

Divide, Denoise, and Defend against Adversarial Attacks
Seyed-Mohsen Moosavi-Dezfooli, A. Shrivastava, Oncel Tuzel
AAML · 57 · 45 · 0 · 19 Feb 2018

Are Generative Classifiers More Robust to Adversarial Attacks?
Yingzhen Li, John Bradshaw, Yash Sharma
AAML · 102 · 79 · 0 · 19 Feb 2018

DARTS: Deceiving Autonomous Cars with Toxic Signs
Chawin Sitawarin, A. Bhagoji, Arsalan Mosenia, M. Chiang, Prateek Mittal
AAML · 128 · 236 · 0 · 18 Feb 2018

ASP:A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
Fuxun Yu, Qide Dong, Xiang Chen
AAML · 62 · 6 · 0 · 15 Feb 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
AAML · 192 · 606 · 0 · 15 Feb 2018

Fooling OCR Systems with Adversarial Text Images
Congzheng Song, Vitaly Shmatikov
AAML · 65 · 51 · 0 · 15 Feb 2018

Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
AAML · 178 · 467 · 0 · 14 Feb 2018

Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models
Mengying Sun, Fengyi Tang, Jinfeng Yi, Fei Wang, Jiayu Zhou
AAML, OOD, MedIm · 85 · 63 · 0 · 13 Feb 2018

Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples
Felix Kreuk, A. Barak, Shir Aviv-Reuven, Moran Baruch, Benny Pinkas, Joseph Keshet
AAML · 75 · 118 · 0 · 13 Feb 2018

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
AAML · 117 · 309 · 0 · 12 Feb 2018

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
SILM, AAML · 176 · 940 · 0 · 09 Feb 2018

TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Shoaib Ahmed Siddiqui, Dominique Mercier, Mohsin Munir, Andreas Dengel, Sheraz Ahmed
FAtt, AI4TS · 115 · 84 · 0 · 08 Feb 2018