On the Independence of Association Bias and Empirical Fairness in Language Models

20 April 2023
Laura Cabello, Anna Katrine van Zee, Anders Søgaard

Papers citing "On the Independence of Association Bias and Empirical Fairness in Language Models"

18 papers shown

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
Aylin Caliskan, Pimparkar Parth Ajay, Tessa E. S. Charlesworth, Robert Wolfe, M. Banaji
07 Jun 2022

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
14 Dec 2021

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavaš
08 Sep 2021

Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska, Yogarshi Vyas, Kashif Shah
28 Jun 2021

A Clarification of the Nuances in the Fairness Metrics Landscape
Alessandro Castelnovo, Riccardo Crupi, Greta Greco, D. Regoli, Ilaria Giuseppina Penco, A. Cosentini
01 Jun 2021

Does enforcing fairness mitigate biases caused by subpopulation shift?
Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
06 Nov 2020

Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl, Malvina Nissim, Albert Gatt
27 Oct 2020

Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach
28 May 2020

Social Biases in NLP Models as Barriers for Persons with Disabilities
Ben Hutchinson, Vinodkumar Prabhakaran, Emily L. Denton, Kellie Webster, Yu Zhong, Stephen Denuyl
02 May 2020

Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity
Ivan Vulić, Simon Baker, Edoardo Ponti, Ulla Petti, Ira Leviant, ..., Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, Anna Korhonen
10 Mar 2020

Measuring Social Biases in Grounded Vision and Language Embeddings
Candace Ross, Boris Katz, Andrei Barbu
20 Feb 2020

Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
Deven Santosh Shah, H. Andrew Schwartz, Dirk Hovy
09 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
04 Nov 2019

Does Gender Matter? Towards Fairness in Dialogue Systems
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang
16 Oct 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
21 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
18 Jun 2019

Learning Gender-Neutral Word Embeddings
Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang
29 Aug 2018

Inherent Trade-Offs in the Fair Determination of Risk Scores
Jon M. Kleinberg, S. Mullainathan, Manish Raghavan
19 Sep 2016