Semantics derived automatically from language corpora contain human-like biases

25 August 2016
Aylin Caliskan
J. Bryson
Arvind Narayanan
arXiv:1608.07187
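
The cited paper introduces the Word Embedding Association Test (WEAT), which quantifies bias as the differential cosine-similarity association between two sets of target words and two sets of attribute words. Below is a minimal sketch of the effect-size computation, assuming a dictionary `vec` of precomputed word embeddings; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, vec):
    """s(w, A, B): mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A]) -
            np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Standardized differential association of target sets X, Y with
    attribute sets A, B (analogous to Cohen's d)."""
    s_X = [association(x, A, B, vec) for x in X]
    s_Y = [association(y, A, B, vec) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Illustrative usage only: 'vec' would map words to embedding vectors, e.g. GloVe.
# vec = {"rose": np.array([...]), "ant": np.array([...]), ...}
# d = weat_effect_size(flowers, insects, pleasant_words, unpleasant_words, vec)
```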

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 315 papers shown
Mitigating Dataset Harms Requires Stewardship: Lessons from 1000 Papers
Kenny Peng
Arunesh Mathur
Arvind Narayanan
99
93
0
06 Aug 2021
Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan
Vitaly Shmatikov
SILM
AAML
38
8
0
22 Jul 2021
Theoretical foundations and limits of word embeddings: what types of meaning can they capture?
Alina Arseniev-Koehler
33
19
0
22 Jul 2021
A Survey on Bias in Visual Datasets
Simone Fabbrizzi
Symeon Papadopoulos
Eirini Ntoutsi
Y. Kompatsiaris
132
121
0
16 Jul 2021
MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
Paul Pu Liang
Yiwei Lyu
Xiang Fan
Zetian Wu
Yun Cheng
...
Peter Wu
Michelle A. Lee
Yuke Zhu
Ruslan Salakhutdinov
Louis-Philippe Morency
VLM
32
159
0
15 Jul 2021
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska
Yogarshi Vyas
Kashif Shah
21
104
0
28 Jun 2021
A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field
Su Lin Blodgett
Zeerak Talat
Yulia Tsvetkov
42
122
0
21 Jun 2021
Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao
Angelina Wang
Olga Russakovsky
30
134
0
16 Jun 2021
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri
Anne Lauscher
Ivan Vulić
Goran Glavas
45
178
0
07 Jun 2021
Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser
I. Nejadgholi
S. Kiritchenko
19
37
0
04 Jun 2021
Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism in the Design and Perception of Conversational Assistants
Gavin Abercrombie
Amanda Cercas Curry
Mugdha Pandya
Verena Rieser
36
53
0
04 Jun 2021
CogView: Mastering Text-to-Image Generation via Transformers
Ming Ding
Zhuoyi Yang
Wenyi Hong
Wendi Zheng
Chang Zhou
...
Junyang Lin
Xu Zou
Zhou Shao
Hongxia Yang
Jie Tang
ViT
VLM
24
762
0
26 May 2021
How Reliable are Model Diagnostics?
V. Aribandi
Yi Tay
Donald Metzler
19
19
0
12 May 2021
Evaluating Gender Bias in Natural Language Inference
Shanya Sharma
Manan Dey
Koustuv Sinha
25
41
0
12 May 2021
Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus
Jack Bandy
Nicholas Vincent
23
57
0
11 May 2021
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
Navid Rekabsaz
Simone Kopeinik
Markus Schedl
24
61
0
28 Apr 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Tejas Srinivasan
Yonatan Bisk
VLM
26
55
0
18 Apr 2021
On the Interpretability and Significance of Bias Metrics in Texts: a PMI-based Approach
Francisco Valentini
Germán Rosati
Damián E. Blasi
D. Slezak
Edgar Altszyler
22
3
0
13 Apr 2021
Semantic maps and metrics for science using deep transformer encoders
Brendan Chambers
James A. Evans
MedIm
13
0
0
13 Apr 2021
Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do
P. Schramowski
Cigdem Turan
Nico Andersen
Constantin Rothkopf
Kristian Kersting
33
281
0
08 Mar 2021
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou
Catarina Moreira
P. Bruza
Chun Ouyang
Joaquim A. Jorge
CML
47
176
0
07 Mar 2021
WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
Bhavya Ghai
Md. Naimul Hoque
Klaus Mueller
29
26
0
05 Mar 2021
Measuring Model Biases in the Absence of Ground Truth
Osman Aka
Ken Burke
Alex Bauerle
Christina Greer
Margaret Mitchell
23
34
0
05 Mar 2021
What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer
Daniel Oster
Timo Speith
Holger Hermanns
Lena Kästner
Eva Schmidt
Andreas Sesing
Kevin Baum
XAI
68
415
0
15 Feb 2021
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Hannah Rose Kirk
Yennie Jun
Haider Iqbal
Elias Benussi
Filippo Volpin
F. Dreyer
Aleksandar Shtedritski
Yuki M. Asano
22
179
0
08 Feb 2021
Low-skilled Occupations Face the Highest Upskilling Pressure
Di Tong
Lingfei Wu
James Allen Evans
32
1
0
27 Jan 2021
Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko
Danushka Bollegala
FaML
38
39
0
23 Jan 2021
Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko
Danushka Bollegala
218
138
0
23 Jan 2021
Censorship of Online Encyclopedias: Implications for NLP Models
Eddie Yang
Margaret E. Roberts
21
16
0
22 Jan 2021
Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models
Brittany Johnson
Jesse Bartola
Rico Angell
Katherine Keith
Sam Witty
S. Giguere
Yuriy Brun
FaML
22
18
0
17 Dec 2020
Towards Neural Programming Interfaces
Zachary Brown
Nathaniel R. Robinson
David Wingate
Nancy Fulda
AI4CE
20
5
0
10 Dec 2020
Cross-Loss Influence Functions to Explain Deep Network Representations
Andrew Silva
Rohit Chopra
Matthew C. Gombolay
TDI
21
15
0
03 Dec 2020
Argument from Old Man's View: Assessing Social Bias in Argumentation
Maximilian Spliethöver
Henning Wachsmuth
8
20
0
24 Nov 2020
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Alexander D'Amour
Katherine A. Heller
D. Moldovan
Ben Adlam
B. Alipanahi
...
Kellie Webster
Steve Yadlowsky
T. Yun
Xiaohua Zhai
D. Sculley
OffRL
77
670
0
06 Nov 2020
Semantic and Relational Spaces in Science of Science: Deep Learning Models for Article Vectorisation
Diego Kozlowski
Jennifer Dusdal
Jun Pang
A. Zilian
18
13
0
05 Nov 2020
Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
Ryan Steed
Aylin Caliskan
SSL
27
156
0
28 Oct 2020
Towards Ethics by Design in Online Abusive Content Detection
S. Kiritchenko
I. Nejadgholi
21
13
0
28 Oct 2020
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl
Malvina Nissim
Albert Gatt
28
123
0
27 Oct 2020
Rethinking embedding coupling in pre-trained language models
Hyung Won Chung
Thibault Févry
Henry Tsai
Melvin Johnson
Sebastian Ruder
95
142
0
24 Oct 2020
Astraea: Grammar-based Fairness Testing
E. Soremekun
Sakshi Udeshi
Sudipta Chattopadhyay
26
27
0
06 Oct 2020
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia
Clara Vania
Rasika Bhalerao
Samuel R. Bowman
20
645
0
30 Sep 2020
Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients
Jing An
Lexing Ying
Yuhua Zhu
37
38
0
28 Sep 2020
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
Francisco Vargas
Ryan Cotterell
35
30
0
20 Sep 2020
Alfie: An Interactive Robot with a Moral Compass
Cigdem Turan
P. Schramowski
Constantin Rothkopf
Kristian Kersting
LM&Ro
11
0
0
11 Sep 2020
Investigating Gender Bias in BERT
Rishabh Bhardwaj
Navonil Majumder
Soujanya Poria
33
106
0
10 Sep 2020
Assessing Demographic Bias in Named Entity Recognition
Shubhanshu Mishra
Sijun He
Luca Belli
12
46
0
08 Aug 2020
Discovering and Categorising Language Biases in Reddit
Xavier Ferrer Aran
Tom van Nuenen
Jose Such
Natalia Criado
8
49
0
06 Aug 2020
Cultural Cartography with Word Embeddings
Dustin S. Stoltz
Marshall A. Taylor
23
38
0
09 Jul 2020
MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline
Yuhao Du
K. Joseph
14
4
0
20 Jun 2020
Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee
Mikhail Yurochkin
Moulinath Banerjee
Yuekai Sun
FaML
26
96
0
19 Jun 2020