
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
16 August 2016
OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 1,604 papers shown
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Pouya Samangouei, Maya Kabkab, Rama Chellappa
AAML, GAN
Cited by 1,166 · 17 May 2018

Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz
PICV, FedML
Cited by 37 · 15 May 2018

Hu-Fu: Hardware and Software Collaborative Attack Framework against Neural Networks
Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang
AAML
Cited by 61 · 14 May 2018

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang
AAML
Cited by 41 · 14 May 2018

AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
Jinyuan Jia, Neil Zhenqiang Gong
AAML
Cited by 161 · 13 May 2018

Curriculum Adversarial Training
Qi-Zhi Cai, Min Du, Chang-rui Liu, D. Song
AAML
Cited by 160 · 13 May 2018

Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, A. Madry
OOD, AAML
Cited by 785 · 30 Apr 2018
Formal Security Analysis of Neural Networks using Symbolic Intervals
Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
AAML
Cited by 473 · 28 Apr 2018

Towards Fast Computation of Certified Robustness for ReLU Networks
Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane S. Boning, Inderjit S. Dhillon, Luca Daniel
AAML
Cited by 686 · 25 Apr 2018

Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning
Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, R. Bobba
OODD
Cited by 6 · 24 Apr 2018

Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin
MLAU, AAML
Cited by 1,190 · 23 Apr 2018

VectorDefense: Vectorization as a Defense to Adversarial Examples
V. Kabilan, Brandon L. Morris, Anh Totti Nguyen
AAML
Cited by 21 · 23 Apr 2018

ADef: an Iterative Algorithm to Construct Adversarial Deformations
Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson
AAML
Cited by 96 · 20 Apr 2018

Semantic Adversarial Deep Learning
S. Seshia, S. Jha, T. Dreossi
AAML, SILM
Cited by 90 · 19 Apr 2018

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau
ObjD
Cited by 424 · 16 Apr 2018
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm
Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
AAML
Cited by 38 · 16 Apr 2018

Adversarial Attacks Against Medical Deep Learning Systems
S. G. Finlayson, Hyung Won Chung, I. Kohane, Andrew L. Beam
SILM, AAML, OOD, MedIm
Cited by 230 · 15 Apr 2018

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
Pu Zhao, Sijia Liu, Yanzhi Wang, X. Lin
AAML
Cited by 37 · 09 Apr 2018

The Effects of JPEG and JPEG2000 Compression on Attacks using Adversarial Examples
Ayse Elvan Aydemir, A. Temizel, T. Taşkaya-Temizel
AAML
Cited by 30 · 28 Mar 2018

Bypassing Feature Squeezing by Increasing Adversary Strength
Yash Sharma, Pin-Yu Chen
AAML
Cited by 34 · 27 Mar 2018

On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples
Pei-Hsuan Lu, Pin-Yu Chen, Chia-Mu Yu
AAML
Cited by 26 · 26 Mar 2018

An Overview of Vulnerabilities of Voice Controlled Systems
Yuan Gong, C. Poellabauer
Cited by 32 · 24 Mar 2018

Understanding Measures of Uncertainty for Adversarial Example Detection
Lewis Smith, Y. Gal
UQCV
Cited by 358 · 22 Mar 2018
Adversarial Defense based on Structure-to-Signal Autoencoders
Joachim Folz, Sebastián M. Palacio, Jörn Hees, Damian Borth, Andreas Dengel
AAML
Cited by 32 · 21 Mar 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML
Cited by 286 · 19 Mar 2018

Semantic Adversarial Examples
Hossein Hosseini, Radha Poovendran
GAN, AAML
Cited by 196 · 16 Mar 2018

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training
Derui Wang, Chaoran Li, S. Wen, Surya Nepal, Yang Xiang
AAML
Cited by 29 · 14 Mar 2018

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Nicolas Papernot, Patrick McDaniel
OOD, AAML
Cited by 503 · 13 Mar 2018

Sparse Adversarial Perturbations for Videos
Xingxing Wei, Jun Zhu, Hang Su
AAML
Cited by 138 · 07 Mar 2018

Protecting JPEG Images Against Adversarial Attacks
Aaditya (Adi) Prakash, N. Moran, Solomon Garber, Antonella DiLillo, J. Storer
AAML
Cited by 34 · 02 Mar 2018

Understanding and Enhancing the Transferability of Adversarial Examples
Lei Wu, Zhanxing Zhu, Cheng Tai, E. Weinan
AAML, SILM
Cited by 96 · 27 Feb 2018
Robust GANs against Dishonest Adversaries
Zhi Xu, Chengtao Li, Stefanie Jegelka
AAML
Cited by 3 · 27 Feb 2018

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
Mahmood Sharif, Lujo Bauer, Michael K. Reiter
AAML
Cited by 138 · 27 Feb 2018

Verifying Controllers Against Adversarial Examples with Bayesian Optimization
Shromona Ghosh, Felix Berkenkamp, G. Ranade, S. Qadeer, Ashish Kapoor
AAML
Cited by 45 · 23 Feb 2018

Deep Defense: Training DNNs with Improved Adversarial Robustness
Ziang Yan, Yiwen Guo, Changshui Zhang
AAML
Cited by 109 · 23 Feb 2018

Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
Gaurav Goswami, Nalini Ratha, Akshay Agarwal, Richa Singh, Mayank Vatsa
AAML
Cited by 165 · 22 Feb 2018

L2-Nonexpansive Neural Networks
Haifeng Qian, M. Wegman
Cited by 74 · 22 Feb 2018

Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch
João Monteiro, Isabela Albuquerque, Zahid Akhtar, T. Falk
AAML
Cited by 29 · 21 Feb 2018

On Lyapunov exponents and adversarial perturbation
Vinay Uday Prabhu, Nishant Desai, John Whaley
AAML
Cited by 4 · 20 Feb 2018
Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li-Wei Chen, Michael E. Kounavis, Duen Horng Chau
FedML, AAML
Cited by 225 · 19 Feb 2018

Are Generative Classifiers More Robust to Adversarial Attacks?
Yingzhen Li, John Bradshaw, Yash Sharma
AAML
Cited by 78 · 19 Feb 2018

Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
AAML
Cited by 458 · 14 Feb 2018

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
AAML
Cited by 463 · 31 Jan 2018

Adversarial Texts with Gradient Methods
Zhitao Gong, Wenlu Wang, Yangqiu Song, D. Song, Wei-Shinn Ku
AAML
Cited by 77 · 22 Jan 2018

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
Ji Gao, Jack Lanchantin, M. Soffa, Yanjun Qi
AAML
Cited by 707 · 13 Jan 2018

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks
Yongshuai Liu, Jiyu Chen, Hao Chen
AAML
Cited by 14 · 09 Jan 2018
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Xingjun Ma, Bo-wen Li, Yisen Wang, S. Erfani, S. Wijewickrema, Grant Schoenebeck, D. Song, Michael E. Houle, James Bailey
AAML
Cited by 728 · 08 Jan 2018

Deep Fingerprinting: Undermining Website Fingerprinting Defenses with Deep Learning
Payap Sirinam, Mohsen Imani, Marc Juárez, M. Wright
Cited by 453 · 07 Jan 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, D. Wagner
AAML
Cited by 1,074 · 05 Jan 2018

Did you hear that? Adversarial Examples Against Automatic Speech Recognition
M. Alzantot, Bharathan Balaji, Mani B. Srivastava
AAML
Cited by 251 · 02 Jan 2018