ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting (arXiv:1901.09451)

27 January 2019
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Adam Tauman Kalai

Papers citing "Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting"

50 / 110 papers shown
- Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making (23 Sep 2022) [FaML]
  Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
- Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks (27 Jul 2022) [AAML, AI4CE]
  Tilman Raukur, A. Ho, Stephen Casper, Dylan Hadfield-Menell
- Algorithmic Fairness in Business Analytics: Directions for Research and Practice (22 Jul 2022) [FaML]
  Maria De-Arteaga, Stefan Feuerriegel, M. Saar-Tsechansky
- Robustness Analysis of Video-Language Models Against Visual and Language Perturbations (05 Jul 2022) [VLM]
  Madeline Chantry Schiappa, Shruti Vyas, Hamid Palangi, Yogesh S Rawat, Vibhav Vineet
- Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset (01 Jul 2022) [AILaw, ELM]
  Peter Henderson, M. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho
- Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias (20 Jun 2022)
  Yarden Tal, Inbal Magar, Roy Schwartz
- Conditional Supervised Contrastive Learning for Fair Text Classification (23 May 2022) [FaML]
  Jianfeng Chi, Will Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian
- A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making (10 May 2022)
  Max Schemmer, Patrick Hemmer, Maximilian Nitsche, Niklas Kühl, Michael Vossing
- Optimising Equal Opportunity Fairness in Model Training (05 May 2022) [FaML]
  Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
- Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms (28 Apr 2022)
  Terrence Neumann, Maria De-Arteaga, S. Fazelpour
- LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models (26 Apr 2022) [KELM]
  Mor Geva, Avi Caciularu, Guy Dar, Paul Roit, Shoval Sadde, Micah Shlain, Bar Tamir, Yoav Goldberg
- Analyzing Gender Representation in Multilingual Models (20 Apr 2022)
  Hila Gonen, Shauli Ravfogel, Yoav Goldberg
- How Gender Debiasing Affects Internal Model Representations, and Why It Matters (14 Apr 2022)
  Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov
- Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal (23 Mar 2022)
  Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan
- Fairness constraint in Structural Econometrics and Application to fair estimation using Instrumental Variables (16 Feb 2022) [FaML]
  S. Centorrino, J. Florens, Jean-Michel Loubes
- XAI for Transformers: Better Explanations through Conservative Propagation (15 Feb 2022) [FAtt]
  Ameen Ali, Thomas Schnake, Oliver Eberle, G. Montavon, Klaus-Robert Müller, Lior Wolf
- Adaptive Sampling Strategies to Construct Equitable Training Datasets (31 Jan 2022)
  William Cai, R. Encarnación, Bobbie Chern, S. Corbett-Davies, Miranda Bogen, Stevie Bergman, Sharad Goel
- Learning Fair Representations via Rate-Distortion Maximization (31 Jan 2022) [FaML]
  Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
- POTATO: exPlainable infOrmation exTrAcTion framewOrk (31 Jan 2022)
  Adam Kovacs, Kinga Gémes, Eszter Iklódi, Gábor Recski
- Kernelized Concept Erasure (28 Jan 2022)
  Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, Ryan Cotterell
- Linear Adversarial Concept Erasure (28 Jan 2022) [KELM]
  Shauli Ravfogel, Michael Twiton, Yoav Goldberg, Ryan Cotterell
- An External Stability Audit Framework to Test the Validity of Personality Prediction in AI Hiring (23 Jan 2022) [MLAU]
  Alene K. Rhea, K. Markey, L. D'Arinzo, Hilke Schellmann, Mona Sloane, Paul Squires, Falaah Arif Khan, Julia Stoyanovich
- A Causal Lens for Controllable Text Generation (22 Jan 2022)
  Zhiting Hu, Erran L. Li
- Few-shot Learning with Multilingual Language Models (20 Dec 2021) [BDL, ELM, LRM]
  Xi Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, ..., Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Ves Stoyanov, Xian Li
- Measure and Improve Robustness in NLP Models: A Survey (15 Dec 2021)
  Xuezhi Wang, Haohan Wang, Diyi Yang
- Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models (14 Dec 2021)
  Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
- Fairness for AUC via Feature Augmentation (24 Nov 2021)
  H. Fong, Vineet Kumar, Anay Mehrotra, Nisheeth K. Vishnoi
- RadFusion: Benchmarking Performance and Fairness for Multimodal Pulmonary Embolism Detection from CT and EHR (23 Nov 2021)
  Yuyin Zhou, Shih-Cheng Huang, Jason Alan Fries, Alaa Youssef, T. Amrhein, ..., Imon Banerjee, D. Rubin, Lei Xing, N. Shah, M. Lungren
- Fairness-aware Class Imbalanced Learning (21 Sep 2021) [FaML]
  Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann
- Evaluating Debiasing Techniques for Intersectional Biases (21 Sep 2021)
  Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann
- Balancing out Bias: Achieving Fairness Through Balanced Training (16 Sep 2021)
  Xudong Han, Timothy Baldwin, Trevor Cohn
- Attributing Fair Decisions with Attention Interventions (08 Sep 2021)
  Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan
- Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies (27 Aug 2021)
  Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, J. M. Phillips, Kai-Wei Chang
- On Measures of Biases and Harms in NLP (07 Aug 2021)
  Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
- End-to-End Weak Supervision (05 Jul 2021) [NoLa]
  Salva Rühling Cachay, Benedikt Boecking, A. Dubrawski
- Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics (28 Jun 2021)
  Paula Czarnowska, Yogarshi Vyas, Kashif Shah
- A Survey of Race, Racism, and Anti-Racism in NLP (21 Jun 2021)
  Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov
- On the Lack of Robust Interpretability of Neural Text Classifiers (08 Jun 2021) [AAML]
  Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi
- Explanation-Based Human Debugging of NLP Models: A Survey (30 Apr 2021) [LRM]
  Piyawat Lertvittayakumjorn, Francesca Toni
- Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers (28 Apr 2021)
  Navid Rekabsaz, Simone Kopeinik, Markus Schedl
- WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings (05 Mar 2021)
  Bhavya Ghai, Md. Naimul Hoque, Klaus Mueller
- Contrastive Explanations for Model Interpretability (02 Mar 2021)
  Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg
- Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits (03 Feb 2021) [MLAU]
  Jack Bandy
- Diverse Adversaries for Mitigating Bias in Training (25 Jan 2021)
  Xudong Han, Timothy Baldwin, Trevor Cohn
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling (11 Dec 2020) [NoLa, OffRL]
  Benedikt Boecking, Willie Neiswanger, Eric Xing, A. Dubrawski
- Underspecification Presents Challenges for Credibility in Modern Machine Learning (06 Nov 2020) [OffRL]
  Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases (28 Oct 2020) [SSL]
  Ryan Steed, Aylin Caliskan
- CausaLM: Causal Model Explanation Through Counterfactual Language Models (27 May 2020) [CML, LRM]
  Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
- Social Biases in NLP Models as Barriers for Persons with Disabilities (02 May 2020)
  Ben Hutchinson, Vinodkumar Prabhakaran, Emily L. Denton, Kellie Webster, Yu Zhong, Stephen Denuyl
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection (16 Apr 2020)
  Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg