Towards Evaluating the Robustness of Neural Networks

16 August 2016
Nicholas Carlini
D. Wagner
OOD
AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 of 1,676 citing papers shown
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Miguel Villarreal-Vasquez
B. Bhargava
AAML
19
38
0
01 Jul 2020
Unifying Model Explainability and Robustness via Machine-Checkable Concepts
Vedant Nanda
Till Speicher
John P. Dickerson
Krishna P. Gummadi
Muhammad Bilal Zafar
AAML
14
4
0
01 Jul 2020
Improving Calibration through the Relationship with Adversarial Robustness
Yao Qin
Xuezhi Wang
Alex Beutel
Ed H. Chi
AAML
40
25
0
29 Jun 2020
Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy
Sijia Liu
Gaoyuan Zhang
Cynthia Liu
Pin-Yu Chen
Shiyu Chang
Luca Daniel
AAML
FAtt
34
66
0
26 Jun 2020
Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger
Josephine Passananti
A. Bhagoji
Yuanshun Yao
Haitao Zheng
Ben Y. Zhao
AAML
31
200
0
25 Jun 2020
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
Christian Tjandraatmadja
Ross Anderson
Joey Huchette
Will Ma
Krunal Patel
J. Vielma
AAML
27
89
0
24 Jun 2020
Subpopulation Data Poisoning Attacks
Matthew Jagielski
Giorgio Severi
Niklas Pousette Harger
Alina Oprea
AAML
SILM
24
114
0
24 Jun 2020
Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu
Yueqiang Cheng
Husheng Zhou
Yantao Lu
MIACV
AAML
39
99
0
23 Jun 2020
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
Yi Zhou
Xiaoqing Zheng
Cho-Jui Hsieh
Kai-Wei Chang
Xuanjing Huang
SILM
39
48
0
20 Jun 2020
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples
Kaleel Mahmood
Deniz Gurevin
Marten van Dijk
Phuong Ha Nguyen
AAML
25
22
0
18 Jun 2020
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training
Eran Segalis
Eran Galili
22
16
0
17 Jun 2020
AdvMind: Inferring Adversary Intent of Black-Box Attacks
Ren Pang
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
MLAU
AAML
11
29
0
16 Jun 2020
Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs
Christopher Brix
T. Noll
AAML
25
10
0
16 Jun 2020
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness
Mohammadamin Tavakoli
Forest Agostinelli
Pierre Baldi
AAML
FAtt
36
39
0
16 Jun 2020
Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution
Yan Feng
Baoyuan Wu
Yanbo Fan
Li Liu
Zhifeng Li
Shutao Xia
AAML
32
6
0
15 Jun 2020
PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
Mojtaba Faramarzi
Mohammad Amini
Akilesh Badrinaaraayanan
Vikas Verma
A. Chandar
AAML
36
31
0
14 Jun 2020
Defensive Approximation: Securing CNNs using Approximate Computing
Amira Guesmi
Ihsen Alouani
Khaled N. Khasawneh
M. Baklouti
T. Frikha
Mohamed Abid
Nael B. Abu-Ghazaleh
AAML
19
37
0
13 Jun 2020
Adversarial Self-Supervised Contrastive Learning
Minseon Kim
Jihoon Tack
Sung Ju Hwang
SSL
28
247
0
13 Jun 2020
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Chaofei Yang
Lei Ding
Yiran Chen
H. Li
AAML
27
46
0
12 Jun 2020
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack
Qiuling Xu
Guanhong Tao
Xiangyu Zhang
AAML
22
2
0
12 Jun 2020
Towards Robust Pattern Recognition: A Review
Xu-Yao Zhang
Cheng-Lin Liu
C. Suen
OOD
HAI
26
103
0
12 Jun 2020
Backdoors in Neural Models of Source Code
Goutham Ramakrishnan
Aws Albarghouthi
AAML
SILM
28
56
0
11 Jun 2020
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
Sijia Liu
Pin-Yu Chen
B. Kailkhura
Gaoyuan Zhang
A. Hero III
P. Varshney
26
224
0
11 Jun 2020
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
Xu Sun
Zhiyuan Zhang
Xuancheng Ren
Ruixuan Luo
Liangyou Li
30
39
0
10 Jun 2020
GAP++: Learning to generate target-conditioned adversarial examples
Xiaofeng Mao
YueFeng Chen
Yuhong Li
Yuan He
Hui Xue
AAML
18
8
0
09 Jun 2020
Towards More Practical Adversarial Attacks on Graph Neural Networks
Jiaqi Ma
Shuangrui Ding
Qiaozhu Mei
AAML
17
121
0
09 Jun 2020
Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen
Shangwei Guo
Tianwei Zhang
Xiaofei Xie
Yang Liu
MLAU
MIACV
OffRL
24
45
0
09 Jun 2020
A Self-supervised Approach for Adversarial Robustness
Muzammal Naseer
Salman Khan
Munawar Hayat
Fahad Shahbaz Khan
Fatih Porikli
AAML
24
251
0
08 Jun 2020
Tricking Adversarial Attacks To Fail
Blerta Lindqvist
AAML
16
0
0
08 Jun 2020
Global Robustness Verification Networks
Weidi Sun
Yuteng Lu
Xiyue Zhang
Zhanxing Zhu
Meng Sun
AAML
22
2
0
08 Jun 2020
Are We Hungry for 3D LiDAR Data for Semantic Segmentation? A Survey and Experimental Study
Biao Gao
Yancheng Pan
Chengkun Li
Sibo Geng
Huijing Zhao
3DPC
26
25
0
08 Jun 2020
Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers
S. Melacci
Gabriele Ciravegna
Angelo Sotgiu
Ambra Demontis
Battista Biggio
Marco Gori
Fabio Roli
22
14
0
06 Jun 2020
Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense
Jay N. Paranjape
R. Dubey
Vijendran V. Gopalan
AAML
25
2
0
02 Jun 2020
Adversarial Attacks on Classifiers for Eye-based User Modelling
Inken Hagestedt
Michael Backes
Andreas Bulling
AAML
24
6
0
01 Jun 2020
QEBA: Query-Efficient Boundary-Based Blackbox Attack
Huichen Li
Xiaojun Xu
Xiaolu Zhang
Shuang Yang
Bo-wen Li
AAML
21
178
0
28 May 2020
Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity
Nam Ho-Nguyen
Stephen J. Wright
OOD
50
16
0
28 May 2020
Arms Race in Adversarial Malware Detection: A Survey
Deqiang Li
Qianmu Li
Yanfang Ye
Shouhuai Xu
AAML
24
52
0
24 May 2020
ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds
Kibok Lee
Zhuoyuan Chen
Xinchen Yan
R. Urtasun
Ersin Yumer
3DPC
23
30
0
24 May 2020
Pythia: Grammar-Based Fuzzing of REST APIs with Coverage-guided Feedback and Learning-based Mutations
Vaggelis Atlidakis
Roxana Geambasu
Patrice Godefroid
Marina Polishchuk
Baishakhi Ray
20
38
0
23 May 2020
SINVAD: Search-based Image Space Navigation for DNN Image Classifier Test Input Generation
Sungmin Kang
R. Feldt
S. Yoo
AAML
26
32
0
19 May 2020
Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks
Linhai Ma
Liang Liang
AAML
26
18
0
19 May 2020
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang
A. Bhagoji
Vikash Sehwag
Prateek Mittal
AAML
30
29
0
17 May 2020
Universal Adversarial Perturbations: A Survey
Ashutosh Chaubey
Nikhil Agrawal
Kavya Barnwal
K. K. Guliani
Pramod Mehta
OOD
AAML
42
46
0
16 May 2020
Encryption Inspired Adversarial Defense for Visual Classification
Maungmaung Aprilpyone
Hitoshi Kiya
24
32
0
16 May 2020
Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Tianhang Zheng
Sheng Liu
Changyou Chen
Junsong Yuan
Baochun Li
K. Ren
AAML
21
17
0
14 May 2020
DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses
Yaxin Li
Wei Jin
Han Xu
Jiliang Tang
AAML
32
131
0
13 May 2020
Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers
Brian Kim
Y. Sagduyu
Kemal Davaslioglu
T. Erpek
S. Ulukus
AAML
23
111
0
11 May 2020
Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data
Lu Wang
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
Yuan Jiang
AAML
35
12
0
11 May 2020
Data-Free Network Quantization With Adversarial Knowledge Distillation
Yoojin Choi
Jihwan P. Choi
Mostafa El-Khamy
Jungwon Lee
MQ
27
119
0
08 May 2020
Towards Frequency-Based Explanation for Robust CNN
Zifan Wang
Yilin Yang
Ankit Shrivastava
Varun Rawal
Zihao Ding
AAML
FAtt
21
47
0
06 May 2020