Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
arXiv:2111.04404 (v2, latest)
8 November 2021
Lijia Yu, Xiao-Shan Gao
Topic: AAML
Papers citing "Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks" (4 papers shown)

1. Achieve Optimal Adversarial Accuracy for Adversarial Deep Learning using Stackelberg Game
   Xiao-Shan Gao, Shuang Liu, Lijia Yu
   AAML (17 Jul 2022)

2. Adversarial Parameter Attack on Deep Neural Networks
   Lijia Yu, Yihan Wang, Xiao-Shan Gao
   AAML (20 Mar 2022)

3. The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks
   Alexander Bastounis, A. Hansen, Verner Vlacic
   AAML, OOD (13 Sep 2021)

4. A Robust Classification-autoencoder to Defend Outliers and Adversaries
   Lijia Yu, Xiao-Shan Gao
   AAML (30 Jun 2021)