NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification
arXiv: 2112.13214
25 December 2021
Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, S. Ji, Jingyi Wang, Yue Yu, Jinyin Chen
Papers citing "NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification" (10 papers shown)
| Title | Authors | Topics | Date |
|---|---|---|---|
| Testing Individual Fairness in Graph Neural Networks | Roya Nasiri | | 25 Apr 2025 |
| FairSense: Long-Term Fairness Analysis of ML-Enabled Systems | Yining She, Sumon Biswas, Christian Kästner, Eunsuk Kang | | 03 Jan 2025 |
| MAFT: Efficient Model-Agnostic Fairness Testing for Deep Neural Networks via Zero-Order Gradient Search | Zhaohui Wang, Min Zhang, Jingran Yang, Bojie Shao, Min Zhang | | 31 Dec 2024 |
| Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing | Yisong Xiao, Aishan Liu, Tianlin Li, Xianglong Liu | | 19 May 2023 |
| Towards Understanding Fairness and its Composition in Ensemble Machine Learning | Usman Gohar, Sumon Biswas, Hridesh Rajan | FaML, FedML | 08 Dec 2022 |
| Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs | Haibin Zheng, Haiyang Xiong, Jinyin Chen, Hao-Shang Ma, Guohan Huang | | 25 Oct 2022 |
| Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection | Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu | AAML | 17 Jun 2022 |
| DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection | Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu | FedML, SILM | 18 Jan 2021 |
| Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps | Yujin Huang, Han Hu, Chunyang Chen | AAML, FedML | 12 Jan 2021 |
| Adversarial examples in the physical world | Alexey Kurakin, Ian Goodfellow, Samy Bengio | SILM, AAML | 08 Jul 2016 |