Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
16 August 2016 · arXiv:1608.04644 · OOD, AAML
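For context on what the citing papers below build on: this paper introduces the Carlini-Wagner (C&W) attacks, whose L2 variant generates an adversarial example by minimizing ||delta||_2^2 + c * f(x + delta), where f is negative only once the target class wins by a margin kappa, and the box constraint on pixels is enforced through a tanh change of variables. The following is a minimal, illustrative PyTorch sketch of that formulation, not the authors' reference implementation; the model, inputs, and target labels are placeholder assumptions.

```python
# Hedged sketch of a C&W-style targeted L2 attack (fixed c, no binary search).
import torch
import torch.nn as nn

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Minimize ||x_adv - x||_2^2 + c * f(x_adv) over a tanh reparametrization."""
    # Work in tanh space so x_adv = 0.5*(tanh(w)+1) always stays in [0, 1].
    w = torch.atanh(x.clamp(1e-6, 1 - 1e-6) * 2 - 1).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        # f(x') = max(max_{i != t} Z_i - Z_t, -kappa): non-positive once the
        # target logit beats every other logit by at least kappa.
        target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, target.unsqueeze(1), float('-inf')).amax(dim=1)
        f = torch.clamp(other_logit - target_logit, min=-kappa)
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        loss = (l2 + c * f).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()

if __name__ == "__main__":
    # Toy usage on a random linear "classifier" over 28x28 inputs.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    target = torch.randint(0, 10, (4,))
    x_adv = cw_l2_attack(model, x, target)
    print((x_adv - x).flatten(1).norm(dim=1))  # perturbation sizes
```

In the paper itself the trade-off constant c is found by binary search rather than fixed, and low-confidence (kappa = 0) versus high-confidence (kappa > 0) examples are distinguished; the fixed c above is a simplification.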

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 of 1,583 citing papers shown. Each entry lists the title, authors, topic tags, and date.

Scaleable input gradient regularization for adversarial robustness
  Chris Finlay, Adam M. Oberman · AAML · 27 May 2019

Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders
  Hebi Li, Qi Xiao, Shixin Tian, Jin Tian · AAML · 26 May 2019

Enhancing Adversarial Defense by k-Winners-Take-All
  Chang Xiao, Peilin Zhong, Changxi Zheng · AAML · 25 May 2019

Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019

Thwarting finite difference adversarial attacks with output randomization
  Haidar Khan, Daniel Park, Azer Khan, B. Yener · SILM, AAML · 23 May 2019

A framework for the extraction of Deep Neural Networks by leveraging public data
  Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy · FedML, MLAU, MIACV · 22 May 2019

Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
  Giulio Lovisotto, Simon Eberz, Ivan Martinovic · AAML · 22 May 2019

Testing DNN Image Classifiers for Confusion & Bias Errors
  Yuchi Tian, Ziyuan Zhong, Vicente Ordonez, Gail E. Kaiser, Baishakhi Ray · 20 May 2019

Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain
  Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao · AAML · 19 May 2019

What Do Adversarially Robust Models Look At?
  Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima · 19 May 2019

POPQORN: Quantifying Robustness of Recurrent Neural Networks
  Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin · AAML · 17 May 2019

Robustification of deep net classifiers by key based diversified aggregation with pre-filtering
  O. Taran, Shideh Rezaeifar, T. Holotyak, Slava Voloshynovskiy · AAML · 14 May 2019

Interpreting and Evaluating Neural Network Robustness
  Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, Xiang Chen · AAML · 10 May 2019

POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm
  Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng · AAML · 01 May 2019

Test Selection for Deep Learning Systems
  Wei Ma, Mike Papadakis, Anestis Tsakmalis, Maxime Cordy, Yves Le Traon · OOD · 30 Apr 2019

Adversarial Training and Robustness for Multiple Perturbations
  Florian Tramèr, Dan Boneh · AAML, SILM · 30 Apr 2019

Adversarial Training for Free!
  Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · AAML · 29 Apr 2019

Data Poisoning Attack against Knowledge Graph Embedding
  Hengtong Zhang, T. Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, K. Ren · KELM · 26 Apr 2019

Robustness Verification of Support Vector Machines
  Francesco Ranzato, Marco Zanella · AAML · 26 Apr 2019

Fooling automated surveillance cameras: adversarial patches to attack person detection
  Simen Thys, W. V. Ranst, Toon Goedemé · AAML · 18 Apr 2019

Adversarial Defense Through Network Profiling Based Path Extraction
  Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chong Li, M. Guo, Yuhao Zhu · AAML · 17 Apr 2019

Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction
  Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, Baekgyu Kim · AAML · 15 Apr 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
  David J. Miller, Zhen Xiang, G. Kesidis · AAML · 12 Apr 2019

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
  Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu · CVBM, AAML · 09 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
  Shahbaz Rezaei, Xin Liu · SILM, AAML · 08 Apr 2019

On Training Robust PDF Malware Classifiers
  Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana · AAML · 06 Apr 2019

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
  Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu · SILM, AAML · 05 Apr 2019

Interpreting Adversarial Examples by Activation Promotion and Suppression
  Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, X. Lin · AAML, FAtt · 03 Apr 2019

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks
  Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Göcke, Jianbing Shen, Ling Shao · AAML · 01 Apr 2019

Defending against adversarial attacks by randomized diversification
  O. Taran, Shideh Rezaeifar, T. Holotyak, Slava Voloshynovskiy · AAML · 01 Apr 2019

On the Vulnerability of CNN Classifiers in EEG-Based BCIs
  Xiao Zhang, Dongrui Wu · AAML · 31 Mar 2019

Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems
  Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych · AAML · 27 Mar 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
  Francesco Croce, Jonas Rauber, Matthias Hein · AAML · 27 Mar 2019

A geometry-inspired decision-based attack
  Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · AAML · 26 Mar 2019

Defending against Whitebox Adversarial Attacks via Randomized Discretization
  Yuchen Zhang, Percy Liang · AAML · 25 Mar 2019

The LogBarrier adversarial attack: making effective use of decision boundary information
  Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman · AAML · 25 Mar 2019

Variational Inference with Latent Space Quantization for Adversarial Resilience
  Vinay Kyatham, Prathosh A. P., Tarun Kumar Yadav, Deepak Mishra, Dheeraj Mundhra · AAML · 24 Mar 2019

Scalable Differential Privacy with Certified Robustness in Adversarial Learning
  Nhathai Phan, My T. Thai, Han Hu, R. Jin, Tong Sun, Dejing Dou · 23 Mar 2019

Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
  Yao Qin, Nicholas Carlini, Ian Goodfellow, G. Cottrell, Colin Raffel · AAML · 22 Mar 2019

Adversarial camera stickers: A physical camera-based attack on deep learning systems
  Juncheng Billy Li, Frank R. Schmidt, J. Zico Kolter · AAML · 21 Mar 2019

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
  Matt Jordan, Justin Lewis, A. Dimakis · AAML · 20 Mar 2019

Attribution-driven Causal Analysis for Detection of Adversarial Examples
  Susmit Jha, Sunny Raj, S. Fernandes, Sumit Kumar Jha, S. Jha, Gunjan Verma, B. Jalaeian, A. Swami · AAML · 14 Mar 2019

Smart Home Personal Assistants: A Security and Privacy Review
  Jide S. Edu, Jose Such, Guillermo Suarez-Tangil · 13 Mar 2019

Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
  Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, T. Sherwood, Yuan Xie · AAML, MLAU · 10 Mar 2019

Semantics Preserving Adversarial Learning
  Ousmane Amadou Dia, Elnaz Barshan, Reza Babanezhad · AAML, GAN · 10 Mar 2019

A Learnable ScatterNet: Locally Invariant Convolutional Layers
  Fergal Cotter, N. Kingsbury · 07 Mar 2019

Attack Type Agnostic Perceptual Enhancement of Adversarial Images
  Bilgin Aksoy, A. Temizel · AAML · 07 Mar 2019

Detecting Overfitting via Adversarial Examples
  Roman Werpachowski, András György, Csaba Szepesvári · TDI · 06 Mar 2019

Statistical Guarantees for the Robustness of Bayesian Neural Networks
  L. Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, A. Patané, Matthew Wicker · AAML · 05 Mar 2019

Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search
  Abhimanyu Dubey, L. van der Maaten, Zeki Yalniz, Yixuan Li, D. Mahajan · AAML · 05 Mar 2019