Exacerbating Algorithmic Bias through Fairness Attacks
Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan
AAML
arXiv:2012.08723, 16 December 2020

Papers citing "Exacerbating Algorithmic Bias through Fairness Attacks"

44 citing papers:

FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware Minimization
Yucong Dai, Jie Ji, Xiaolong Ma, Yongkai Wu
29 Mar 2025

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers
Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
08 Mar 2025

BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers
Jiaqi Xue, Qian Lou, Mengxin Zheng
23 Oct 2024

PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning
Jiashi Gao, Ziwei Wang, Xiangyu Zhao, Xin Yao, Xuetao Wei
09 Oct 2024

EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning
Syed Irfan Ali Meerza, Jian-Dong Liu
02 Oct 2024

A Catalog of Fairness-Aware Practices in Machine Learning Engineering
Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba
FaML
29 Aug 2024

LayerMatch: Do Pseudo-labels Benefit All Layers?
Chaoqi Liang, Guanglei Yang, Lifeng Qiao, Zitong Huang, Hongliang Yan, Yunchao Wei, W. Zuo
20 Jun 2024

Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
Zihan Luo, Hong Huang, Yongkang Zhou, Jiping Zhang, Nuo Chen
05 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal
AAML
29 May 2024

Safety in Graph Machine Learning: Threats and Safeguards
Song Wang, Yushun Dong, Binchi Zhang, Zihan Chen, Xingbo Fu, Yinhan He, Cong Shen, Chuxu Zhang, Nitesh V. Chawla, Jundong Li
17 May 2024

Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models
Anshuman Chhabra, Bo Li, Jian Chen, Prasant Mohapatra, Hongfu Liu
TDI
06 May 2024

Exploring Privacy and Fairness Risks in Sharing Diffusion Models: An Adversarial Perspective
Xinjian Luo, Yangfan Jiang, Fei Wei, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
DiffM
28 Feb 2024

The Effect of Data Poisoning on Counterfactual Explanations
André Artelt, Shubham Sharma, Freddy Lecue, Barbara Hammer
13 Feb 2024

TrojFair: Trojan Fairness Attacks
Meng Zheng, Jiaqi Xue, Yi Sheng, Lei Yang, Qian Lou, Lei Jiang
16 Dec 2023

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML
07 Dec 2023

Survey on AI Ethics: A Socio-technical Perspective
Dave Mbiazi, Meghana Bhange, Maryam Babaei, Ivaxi Sheth, Patrik Joslin Kenfack
28 Nov 2023

Deceptive Fairness Attacks on Graphs via Meta Learning
Jian Kang, Yinglong Xia, Ross Maciejewski, Jiebo Luo, Hanghang Tong
24 Oct 2023

Adversarial Attacks on Fairness of Graph Neural Networks
Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
20 Oct 2023

Towards Poisoning Fair Representations
Tianci Liu, Haoyu Wang, Feijie Wu, Hengtong Zhang, Pan Li, Lu Su, Jing Gao
AAML
28 Sep 2023

Adversarial attacks and defenses in explainable artificial intelligence: A survey
Hubert Baniecki, P. Biecek
AAML
06 Jun 2023

Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures
Eugenia Iofinova, Alexandra Peste, Dan Alistarh
25 Apr 2023

To be Robust and to be Fair: Aligning Fairness with Robustness
Junyi Chai, Xiaoqian Wang
31 Mar 2023

Improving Fair Training under Correlation Shifts
Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
05 Feb 2023

Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa
18 Jan 2023

A Survey on Preserving Fairness Guarantees in Changing Environments
Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto
FaML
14 Nov 2022

Fairness-aware Regression Robust to Adversarial Attacks
Yulu Jin, Lifeng Lai
FaML, OOD
04 Nov 2022

Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
Anshuman Chhabra, Peizhao Li, P. Mohapatra, Hongfu Liu
OOD
04 Oct 2022

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
Hussain Hussain, Meng Cao, Sandipan Sikdar, D. Helic, Elisabeth Lex, M. Strohmaier, Roman Kern
13 Sep 2022

How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts
Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
OOD
04 Jul 2022

How Biased are Your Features?: Computing Fairness Influence Functions with Global Sensitivity Analysis
Bishwamittra Ghosh, D. Basu, Kuldeep S. Meel
FaML
01 Jun 2022

Subverting Fair Image Search with Generative Adversarial Perturbations
A. Ghosh, Matthew Jagielski, Chris L. Wilson
05 May 2022

Robust Conversational Agents against Imperceptible Toxicity Triggers
Ninareh Mehrabi, Ahmad Beirami, Fred Morstatter, Aram Galstyan
AAML
05 May 2022

Fairness in Graph Mining: A Survey
Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li
FaML
21 Apr 2022

A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Jun Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang
18 Apr 2022

Breaking Fair Binary Classification with Optimal Flipping Attacks
Changhun Jo, Jy-yong Sohn, Kangwook Lee
FaML
12 Apr 2022

Towards Multi-Objective Statistically Fair Federated Learning
Ninareh Mehrabi, Cyprien de Lichy, John McKay, C. He, William Campbell
FedML
24 Jan 2022

Interpretable Data-Based Explanations for Fairness Debugging
Romila Pradhan, Jiongli Zhu, Boris Glavic, Babak Salimi
17 Dec 2021

Fairness Degrading Adversarial Attacks Against Clustering Algorithms
Anshuman Chhabra, Adish Singla, P. Mohapatra
22 Oct 2021

Poisoning Attacks on Fair Machine Learning
Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu
AAML
17 Oct 2021

Machine Learning for Fraud Detection in E-Commerce: A Research Agenda
Niek Tax, Kees Jan de Vries, Mathijs de Jong, Nikoleta Dosoula, Bram van den Akker, Jon Smith, Olivier Thuong, Lucas Bernardi
05 Jul 2021

FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data
Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert
FaML
22 Jun 2021

Fairness-Aware PAC Learning from Corrupted Data
Nikola Konstantinov, Christoph H. Lampert
11 Feb 2021

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
23 Aug 2019

Fairness Constraints: Mechanisms for Fair Classification
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
FaML
19 Jul 2015