
Towards Evaluating the Robustness of Neural Networks (arXiv:1608.04644)
Nicholas Carlini, D. Wagner
OOD, AAML
16 August 2016

Papers citing "Towards Evaluating the Robustness of Neural Networks"

50 / 1,466 papers shown
CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks
Shashank Agnihotri, Steffen Jung, M. Keuper
AAML
04 Feb 2023

On the Robustness of Randomized Ensembles to Adversarial Perturbations
Hassan Dbouk, Naresh R Shanbhag
AAML
02 Feb 2023

Interpreting Robustness Proofs of Deep Neural Networks
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
AAML
31 Jan 2023

Are Defenses for Graph Neural Networks Robust?
Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
OOD, AAML
31 Jan 2023

Inference Time Evidences of Adversarial Attacks for Forensic on Transformers
Hugo Lemarchant, Liang Li, Yiming Qian, Yuta Nakashima, Hajime Nagahara
ViT, AAML
31 Jan 2023

Language-Driven Anchors for Zero-Shot Adversarial Robustness
Xiao-Li Li, Wei Emma Zhang, Yining Liu, Zhan Hu, Bo-Wen Zhang, Xiaolin Hu
30 Jan 2023

On the Efficacy of Metrics to Describe Adversarial Attacks
Tommaso Puccetti, T. Zoppi, Andrea Ceccarelli
AAML
30 Jan 2023

Benchmarking Robustness to Adversarial Image Obfuscations
Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, ..., Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal
AAML
30 Jan 2023

Identifying Adversarially Attackable and Robust Samples
Vyas Raina, Mark J. F. Gales
AAML
30 Jan 2023

Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi
AAML
29 Jan 2023

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers
Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee
MU
27 Jan 2023

Certified Interpretability Robustness for Class Activation Mapping
Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Lucani E. Daniel
AAML
26 Jan 2023

Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks
Fenglei Fan, Yingxin Li, Hanchuan Peng, T. Zeng, Fei Wang
23 Jan 2023

Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations
Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen
19 Jan 2023

Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
Le‐Yu Chen, Jing Xu, Luo Luo
16 Jan 2023

Meta Generative Attack on Person Reidentification
M. I. A V Subramanyam
AAML
16 Jan 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang
AAML
16 Jan 2023

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SILM, AAML
07 Jan 2023

Beckman Defense
A. V. Subramanyam
OOD, AAML
04 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML
03 Jan 2023

Generalizable Black-Box Adversarial Attack with Meta Learning
Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang
AAML
01 Jan 2023

ExploreADV: Towards exploratory attack for Neural Networks
Tianzuo Luo, Yuyi Zhong, S. Khoo
AAML
01 Jan 2023

Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence
Han Fang, Jiyi Zhang, Yupeng Qiu, Ke Xu, Chengfang Fang, E. Chang
AAML
31 Dec 2022

Guidance Through Surrogate: Towards a Generic Diagnostic Attack
Muzammal Naseer, Salman Khan, Fatih Porikli, Fahad Shahbaz Khan
AAML
30 Dec 2022

ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models
Arkadipta De, Satya Swaroop Gudipudi, Sourab Panchanan, M. Desarkar
FaML
30 Dec 2022

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML
29 Dec 2022

MVTN: Learning Multi-View Transformations for 3D Understanding
Abdullah Hamdi, Faisal AlZahrani, Silvio Giancola, Guohao Li
3DPC, 3DV
27 Dec 2022

Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
Anqi Zhao, Tong Chu, Yahao Liu, Wen Li, Jingjing Li, Lixin Duan
AAML
18 Dec 2022

Confidence-aware Training of Smoothed Classifiers for Certified Robustness
Jongheon Jeong, Seojin Kim, Jinwoo Shin
AAML
18 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
FAtt
16 Dec 2022

Adversarial Example Defense via Perturbation Grading Strategy
Shaowei Zhu, Wanli Lyu, Bin Li, Z. Yin, Bin Luo
AAML
16 Dec 2022

Alternating Objectives Generates Stronger PGD-Based Adversarial Attacks
Nikolaos Antoniou, Efthymios Georgiou, Alexandros Potamianos
AAML
15 Dec 2022

Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Eric Wang, Carl Vondrick
VLM
14 Dec 2022

SAIF: Sparse Adversarial and Imperceptible Attack Framework
Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, Octavia Camps, Mario Sznaier, Jennifer Dy
AAML
14 Dec 2022

Adversarially Robust Video Perception by Seeing Motion
Lingyu Zhang, Chengzhi Mao, Junfeng Yang, Carl Vondrick
VGen, AAML
13 Dec 2022

Robust Perception through Equivariance
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hongya Wang, Carl Vondrick
BDL, AAML
12 Dec 2022

REAP: A Large-Scale Realistic Adversarial Patch Benchmark
Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner
AAML
12 Dec 2022

General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments
Xiaogang Xu, Hengshuang Zhao, Philip Torr, Jiaya Jia
AAML
11 Dec 2022

Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking
Dennis Gross, T. D. Simão, N. Jansen, G. Pérez
AAML
10 Dec 2022

Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran
NoLa, TDI
08 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL
06 Dec 2022

QEBVerif: Quantization Error Bound Verification of Neural Networks
Yedi Zhang, Fu Song, Jun Sun
MQ
06 Dec 2022

Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance
Ngoc N. Tran, Anh Tuan Bui, Dinh Q. Phung, Trung Le
AAML
05 Dec 2022

FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration
Žiga Babnik, Peter Peer, Vitomir Štruc
CVBM, AAML
05 Dec 2022

Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning
Mingyuan Fan, Cen Chen, Chengyu Wang, Ximeng Liu, Wenmeng Zhou, Jun Huang
AAML, FedML
05 Dec 2022

Analogical Math Word Problems Solving with Enhanced Problem-Solution Association
Zhenwen Liang, Jipeng Zhang, Xiangliang Zhang
AIMat
01 Dec 2022

AdvMask: A Sparse Adversarial Attack Based Data Augmentation Method for Image Classification
Suorong Yang, Jinqiao Li, Jian Zhao, S. Furao
AAML
29 Nov 2022

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML
29 Nov 2022

Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces
Xiaoqing Chen, Dongrui Wu
AAML
28 Nov 2022

Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning
Xiaoyue Duan, Guoliang Kang, Runqi Wang, Shumin Han, Shenjun Xue, Tian Wang, Baochang Zhang
28 Nov 2022