To be Robust or to be Fair: Towards Fairness in Adversarial Training
Han Xu, Xiaorui Liu, Yaxin Li, Anil K. Jain, Jiliang Tang
13 October 2020 · arXiv:2010.06121
Papers citing "To be Robust or to be Fair: Towards Fairness in Adversarial Training" (32 of 32 papers shown)
Long-tailed Adversarial Training with Self-Distillation
Seungju Cho, Hongsin Lee, Changick Kim [AAML, TTA] · 09 Mar 2025

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers
Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou · 08 Mar 2025

FAIR-TAT: Improving Model Fairness Using Targeted Adversarial Training
Tejaswini Medi, Steffen Jung, M. Keuper [AAML] · 30 Oct 2024

The Pursuit of Fairness in Artificial Intelligence Models: A Survey
Tahsin Alamgir Kheya, Mohamed Reda Bouadjenek, Sunil Aryal · 26 Mar 2024

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan [AAML] · 07 Dec 2023

Group-based Robustness: A General Framework for Customized Robustness in the Real World
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif [OOD, AAML] · 29 Jun 2023

Causality-Aided Trade-off Analysis for Machine Learning Fairness
Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li [FaML] · 22 May 2023

A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems
Nicolò Pagan, Joachim Baumann, Ezzat Elokda, Giulia De Pasquale, S. Bolognani, Anikó Hannák · 10 May 2023

A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness
Zongxiong Chen, Jiahui Geng, Derui Zhu, Herbert Woisetschlaeger, Qing Li, Sonja Schimmler, Ruben Mayer, Chunming Rong [DD] · 05 May 2023

Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing
Walid Hariri [AI4MH, LM&MA] · 27 Mar 2023

PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities
Zhuqing Liu, Xin Zhang, Songtao Lu, Jia-Wei Liu · 05 Mar 2023

UnbiasedNets: A Dataset Diversification Framework for Robustness Bias Alleviation in Neural Networks
Mahum Naseer, B. Prabakaran, Osman Hasan, Muhammad Shafique · 24 Feb 2023

Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition
Luke E. Richards, Edward Raff, Cynthia Matuszek [AAML] · 17 Feb 2023

Do Neural Networks Generalize from Self-Averaging Sub-classifiers in the Same Way As Adaptive Boosting?
Michael Sun, Peter Chatain [AI4CE] · 14 Feb 2023

Fairness Increases Adversarial Vulnerability
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck · 21 Nov 2022

Combating Health Misinformation in Social Media: Characterization, Detection, Intervention, and Open Issues
Canyu Chen, Haoran Wang, Matthew A. Shapiro, Yunyu Xiao, Fei Wang, Kai Shu · 10 Nov 2022

Fairness-aware Regression Robust to Adversarial Attacks
Yulu Jin, Lifeng Lai [FaML, OOD] · 04 Nov 2022

Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting
Peng-Fei Hou, Jie Han, Xingyu Li [AAML, OOD] · 26 Oct 2022

Improving Robust Fairness via Balance Adversarial Training
Chunyu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, XiangLong Liu, Aishan Liu · 15 Sep 2022

Class-Level Logit Perturbation
Mengyang Li, Fengguang Su, O. Wu (Tianjin University) [AAML] · 13 Sep 2022

Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, Bo Li [FedML] · 20 Jul 2022

Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning
Damien Dablain, Bartosz Krawczyk, Nitesh V. Chawla [FaML] · 13 Jul 2022

FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks
Kiarash Mohammadi, Aishwarya Sivaraman, G. Farnadi · 01 Jun 2022

Pruning has a disparate impact on model accuracy
Cuong Tran, Ferdinando Fioretto, Jung-Eun Kim, Rakshit Naidu · 26 May 2022

Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems
Mostafa M. Mohamed, Björn W. Schuller · 02 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen [AAML] · 31 Jan 2022

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou · 04 Oct 2021

Imbalanced Adversarial Training with Reweighting
Wentao Wang, Han Xu, Xiaorui Liu, Yaxin Li, B. Thuraisingham, Jiliang Tang · 28 Jul 2021

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang [FaML] · 12 Jul 2021

Stochastic-Shield: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs
Lorena Qendro, Sangwon Ha, R. D. Jong, Partha P. Maji [AAML, FedML, MQ] · 13 May 2021

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein [VLM] · 19 Oct 2020

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio [AAML] · 04 Nov 2016