ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
arXiv:1608.04644, 16 August 2016 (v2, latest)
Tags: OOD, AAML

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 4,017 papers shown
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
AAML
112
81
0
29 Dec 2022
How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection
U. Ciftci
Ilke Demir
67
6
0
28 Dec 2022
Differentiable Search of Accurate and Robust Architectures
Yuwei Ou
Xiangning Xie
Shan Gao
Yanan Sun
Kay Chen Tan
Jiancheng Lv
OOD, AAML
73
2
0
28 Dec 2022
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
AAML
77
3
0
28 Dec 2022
MVTN: Learning Multi-View Transformations for 3D Understanding
Abdullah Hamdi
Faisal AlZahrani
Silvio Giancola
Guohao Li
3DV, 3DPC
143
6
0
27 Dec 2022
Frequency Regularization for Improving Adversarial Robustness
Binxiao Huang
Chaofan Tao
R. Lin
Ngai Wong
AAML
39
4
0
24 Dec 2022
Out-of-Distribution Detection with Reconstruction Error and Typicality-based Penalty
Genki Osada
Tsubasa Takahashi
Budrul Ahsan
Takashi Nishide
OODD
98
14
0
24 Dec 2022
Aliasing is a Driver of Adversarial Attacks
Adrian Rodriguez-Munoz
Antonio Torralba
AAML
66
0
0
22 Dec 2022
A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression
Deepak Maurya
Jean Honorio
AAML
76
0
0
21 Dec 2022
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective
Shihua Huang
Zhichao Lu
Kalyanmoy Deb
Vishnu Boddeti
OOD
102
45
0
21 Dec 2022
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing
Yahan Yang
Soham Dan
Dan Roth
Insup Lee
80
5
0
20 Dec 2022
A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
Yifan Zhang
Xianqiang Lyu
Yixuan Yuan
AAML, 3DPC
75
34
0
20 Dec 2022
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization
Bairu Hou
Jinghan Jia
Yihua Zhang
Guanhua Zhang
Yang Zhang
Sijia Liu
Shiyu Chang
SILM, AAML
66
24
0
19 Dec 2022
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
Anqi Zhao
Tong Chu
Yahao Liu
Wen Li
Jingjing Li
Lixin Duan
AAML
74
18
0
18 Dec 2022
Confidence-aware Training of Smoothed Classifiers for Certified Robustness
Jongheon Jeong
Seojin Kim
Jinwoo Shin
AAML
99
7
0
18 Dec 2022
Robust Explanation Constraints for Neural Networks
Matthew Wicker
Juyeon Heo
Luca Costabello
Adrian Weller
FAtt
65
18
0
16 Dec 2022
Adversarial Example Defense via Perturbation Grading Strategy
Shaowei Zhu
Wanli Lyu
Bin Li
Z. Yin
Bin Luo
AAML
78
1
0
16 Dec 2022
Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks
Nikolaos Antoniou
Efthymios Georgiou
Alexandros Potamianos
AAML
71
5
0
15 Dec 2022
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
Chengzhi Mao
Scott Geng
Junfeng Yang
Xin Eric Wang
Carl Vondrick
VLM
102
71
0
14 Dec 2022
SAIF: Sparse Adversarial and Imperceptible Attack Framework
Tooba Imtiaz
Morgan Kohler
Jared Miller
Zifeng Wang
Octavia Camps
Mario Sznaier
Jennifer Dy
AAML
100
0
0
14 Dec 2022
Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection
P. Lorenz
Margret Keuper
J. Keuper
AAML
97
7
0
13 Dec 2022
Adversarially Robust Video Perception by Seeing Motion
Lingyu Zhang
Chengzhi Mao
Junfeng Yang
Carl Vondrick
VGen, AAML
95
2
0
13 Dec 2022
Robust Perception through Equivariance
Chengzhi Mao
Lingyu Zhang
Abhishek Joshi
Junfeng Yang
Hongya Wang
Carl Vondrick
BDL, AAML
98
8
0
12 Dec 2022
REAP: A Large-Scale Realistic Adversarial Patch Benchmark
Nabeel Hingun
Chawin Sitawarin
Jerry Li
David Wagner
AAML
97
15
0
12 Dec 2022
DISCO: Adversarial Defense with Local Implicit Functions
Chih-Hui Ho
Nuno Vasconcelos
AAML
130
39
0
11 Dec 2022
General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments
Xiaogang Xu
Hengshuang Zhao
Philip Torr
Jiaya Jia
AAML
61
2
0
11 Dec 2022
Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors
Giovanni Apruzzese
V. S. Subrahmanian
AAML
87
21
0
11 Dec 2022
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking
Dennis Gross
T. D. Simão
N. Jansen
G. Pérez
AAML
95
2
0
10 Dec 2022
QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks
Yedi Zhang
Zhe Zhao
Fu Song
Hao Fei
Tao Chen
Jun Sun
69
18
0
10 Dec 2022
Fairify: Fairness Verification of Neural Networks
Sumon Biswas
Hridesh Rajan
81
26
0
08 Dec 2022
Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh
Hanie Sedghi
Patrick Thiran
NoLa, TDI
87
4
0
08 Dec 2022
PKDGA: A Partial Knowledge-based Domain Generation Algorithm for Botnets
Lihai Nie
Xiaoyang Shan
Laiping Zhao
Keqiu Li
66
5
0
08 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
71
6
0
06 Dec 2022
QEBVerif: Quantization Error Bound Verification of Neural Networks
Yedi Zhang
Fu Song
Jun Sun
MQ
99
12
0
06 Dec 2022
Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance
Ngoc N. Tran
Anh Tuan Bui
Dinh Q. Phung
Trung Le
AAML
70
1
0
05 Dec 2022
FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration
Žiga Babnik
Peter Peer
Vitomir Štruc
CVBM, AAML
73
19
0
05 Dec 2022
Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense
Bao Gia Doan
Ehsan Abbasnejad
Javen Qinfeng Shi
Damith Ranashinghe
AAML, OOD
89
8
0
05 Dec 2022
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Mingyuan Fan
Cen Chen
Chengyu Wang
Ximeng Liu
Wenmeng Zhou
AAML, FedML
118
0
0
05 Dec 2022
Analogical Math Word Problems Solving with Enhanced Problem-Solution Association
Zhenwen Liang
Jipeng Zhang
Xiangliang Zhang
AIMat
69
17
0
01 Dec 2022
Hijack Vertical Federated Learning Models As One Party
Pengyu Qiu
Xuhong Zhang
Shouling Ji
Changjiang Li
Yuwen Pu
Xing Yang
Ting Wang
FedML
124
6
0
01 Dec 2022
Tight Certification of Adversarially Trained Neural Networks via Nonconvex Low-Rank Semidefinite Relaxations
Hong-Ming Chiu
Richard Y. Zhang
AAML
88
3
0
30 Nov 2022
AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification
Suorong Yang
Jinqiao Li
Jian Zhao
S. Furao
AAML
69
6
0
29 Nov 2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models
Guanhong Tao
Zhenting Wang
Shuyang Cheng
Shiqing Ma
Shengwei An
Yingqi Liu
Guangyu Shen
Zhuo Zhang
Yunshu Mao
Xiangyu Zhang
SILM
75
17
0
29 Nov 2022
Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov
Mohammed Abuhamad
Simon S. Woo
Eric Chan-Tin
Tamer Abuhmed
AAML
68
9
0
29 Nov 2022
Data Poisoning Attack Aiming the Vulnerability of Continual Learning
Gyojin Han
Jaehyun Choi
H. Hong
Junmo Kim
AAML
35
2
0
29 Nov 2022
Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces
Xiaoqing Chen
Dongrui Wu
AAML
93
4
0
28 Nov 2022
Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning
Xiaoyue Duan
Guoliang Kang
Runqi Wang
Shumin Han
Shenjun Xue
Tian Wang
Baochang Zhang
83
2
0
28 Nov 2022
Imperceptible Adversarial Attack via Invertible Neural Networks
Zihan Chen
Zifan Wang
Junjie Huang
Wentao Zhao
Xiao Liu
Dejian Guan
AAML
135
22
0
28 Nov 2022
Adversarial Rademacher Complexity of Deep Neural Networks
Jiancong Xiao
Yanbo Fan
Ruoyu Sun
Zhimin Luo
AAML
64
23
0
27 Nov 2022
Foiling Explanations in Deep Neural Networks
Snir Vitrack Tamam
Raz Lapid
Moshe Sipper
AAML
75
17
0
27 Nov 2022