ResearchTrend.AI

Semantics derived automatically from language corpora contain human-like biases
arXiv:1608.07187 · 25 August 2016
Aylin Caliskan, J. Bryson, Arvind Narayanan

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 315 papers shown
Group-Fair Online Allocation in Continuous Time
Semih Cayci, Swati Gupta, A. Eryilmaz
11 Jun 2020 · FaML

Fair Bayesian Optimization
Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, K. Kenthapadi, Cédric Archambeau
09 Jun 2020 · FaML

CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
27 May 2020 · CML, LRM

Studying the Transfer of Biases from Programmers to Programs
Christian Johansen, Tore Pedersen, Johanna Johansen
17 May 2020

Mitigating Gender Bias in Machine Learning Data Sets
Susan Leavy, G. Meaney, Karen Wade, Derek Greene
14 May 2020 · FaML

Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger
04 May 2020

On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
Adina Williams, Ryan Cotterell, Lawrence Wolf-Sonkin, Damián E. Blasi, Hanna M. Wallach
03 May 2020

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
03 May 2020 · CVBM

Social Biases in NLP Models as Barriers for Persons with Disabilities
Ben Hutchinson, Vinodkumar Prabhakaran, Emily L. Denton, Kellie Webster, Yu Zhong, Stephen Denuyl
02 May 2020
Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
01 May 2020 · FaML

Do Neural Ranking Models Intensify Gender Bias?
Navid Rekabsaz, Markus Schedl
01 May 2020

Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea
01 May 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
20 Apr 2020

Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter
Autumn Toney, Akshat Pandey, W. Guo, David A. Broniatowski, Aylin Caliskan
18 Apr 2020

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
16 Apr 2020

Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat
Alina Arseniev-Koehler, J. Foster
24 Mar 2020

Joint Multiclass Debiasing of Word Embeddings
Radovan Popović, Florian Lemmerich, M. Strohmaier
09 Mar 2020 · FaML

Text-based inference of moral sentiment change
Jing Yi Xie, Renato Ferreira Pinto Junior, Graeme Hirst, Yang Xu
20 Jan 2020

RobBERT: a Dutch RoBERTa-based Language Model
Pieter Delobelle, Thomas Winters, Bettina Berendt
17 Jan 2020

Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations
Pinkesh Badjatiya, Manish Gupta, Vasudeva Varma
15 Jan 2020
Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics
Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, Michael Carl Tschantz
17 Dec 2019 · FaML

Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
Zeyu Wang, Klint Qinami, Yannis Karakozis, Kyle Genova, P. Nair, Kenji Hata, Olga Russakovsky
26 Nov 2019

Towards Understanding Gender Bias in Relation Extraction
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, ..., Jieyu Zhao, Diba Mirza, E. Belding, Kai-Wei Chang, William Yang Wang
09 Nov 2019 · FaML

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
04 Nov 2019 · FaML

Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao, Hal Daumé
30 Oct 2019

Perturbation Sensitivity Analysis to Detect Unintended Model Biases
Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell
09 Oct 2019

Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
Ameya Vaidya, Feng Mai, Yue Ning
21 Sep 2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
Anne Lauscher, Goran Glavas, Simone Paolo Ponzetto, Ivan Vulić
13 Sep 2019

Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance
Mahault Garnerin, Solange Rossato, Laurent Besacier
23 Aug 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
23 Aug 2019 · SyDa, FaML
Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
24 Jun 2019

Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation
Daniel Loureiro, A. Jorge
24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
21 Jun 2019 · AI4CE

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
18 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc
14 Jun 2019 · FaML

Training Temporal Word Embeddings with a Compass
Valerio Di Carlo, Federico Bianchi, M. Palmonari
05 Jun 2019

Characterizing Bias in Classifiers using Generative Models
Daniel J. McDuff, Shuang Ma, Yale Song, Ashish Kapoor
30 May 2019

Racial Bias in Hate Speech and Abusive Language Detection Datasets
Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber
29 May 2019

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
S. Geyik, Stuart Ambler, K. Kenthapadi
30 Apr 2019

Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors
Anne Lauscher, Goran Glavas
26 Apr 2019

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Christine Basta, Marta R. Costa-jussà, Noe Casas
18 Apr 2019
What's in a Name? Reducing Bias in Bios without Access to Protected Attributes
Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
10 Apr 2019

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman
05 Apr 2019 · FaML

On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
25 Mar 2019

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Hila Gonen, Yoav Goldberg
09 Mar 2019

Copying Machine Learning Classifiers
Irene Unceta, Jordi Nin, O. Pujol
05 Mar 2019

Implicit Diversity in Image Summarization
L. E. Celis, Vijay Keswani
29 Jan 2019

Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques
Joel Escudé Font, Marta R. Costa-jussà
10 Jan 2019

What are the biases in my word embedding?
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark D. M. Leiserson, Adam Tauman Kalai
20 Dec 2018

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez
20 Nov 2018 · FaML