ResearchTrend.AI

Language (Technology) is Power: A Critical Survey of "Bias" in NLP

28 May 2020
Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach

Papers citing "Language (Technology) is Power: A Critical Survey of "Bias" in NLP"

36 / 236 papers shown
Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets
Irene Solaiman, Christy Dennison
18 Jun 2021
Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao, Angelina Wang, Olga Russakovsky
16 Jun 2021
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavas
07 Jun 2021
Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko
04 Jun 2021
How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact
Zhijing Jin, Geeticka Chauhan, Brian Tse, Mrinmaya Sachan, Rada Mihalcea
04 Jun 2021
Focus Attention: Promoting Faithfulness and Diversity in Summarization
Rahul Aralikatte, Shashi Narayan, Joshua Maynez, S. Rothe, Ryan T. McDonald
25 May 2021
Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors
Hong Shen, Alicia DeVrio, Motahhare Eslami, Kenneth Holstein
06 May 2021
Reliability Testing for Natural Language Processing Systems
Samson Tan, Chenyu You, K. Baxter, Araz Taeihagh, G. Bennett, Min-Yen Kan
06 May 2021
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
Navid Rekabsaz, Simone Kopeinik, Markus Schedl
28 Apr 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Tejas Srinivasan, Yonatan Bisk
18 Apr 2021
Detoxifying Language Models Risks Marginalizing Minority Voices
Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, Dan Klein
13 Apr 2021
Semantic maps and metrics for science using deep transformer encoders
Brendan Chambers, James A. Evans
13 Apr 2021
How to Write a Bias Statement: Recommendations for Submissions to the Workshop on Gender Bias in NLP
Christian Hardmeier, Marta R. Costa-jussà, Kellie Webster, Will Radford, Su Lin Blodgett
07 Apr 2021
What Will it Take to Fix Benchmarking in Natural Language Understanding?
Samuel R. Bowman, George E. Dahl
05 Apr 2021
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, F. Dreyer, Aleksandar Shtedritski, Yuki M. Asano
08 Feb 2021
Censorship of Online Encyclopedias: Implications for NLP Models
Eddie Yang, Margaret E. Roberts
22 Jan 2021
Cross-Loss Influence Functions to Explain Deep Network Representations
Andrew Silva, Rohit Chopra, Matthew C. Gombolay
03 Dec 2020
Modifying Memories in Transformer Models
Chen Zhu, A. S. Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, Sanjiv Kumar
01 Dec 2020
Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP
John Chen, Ian Berlot-Attwell, Safwan Hossain, Xindi Wang, Frank Rudzicz
19 Nov 2020
On the State of Social Media Data for Mental Health Research
Keith Harrigian, Carlos Alejandro Aguirre, Mark Dredze
10 Nov 2020
Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
Ryan Steed, Aylin Caliskan
28 Oct 2020
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl, Malvina Nissim, Albert Gatt
27 Oct 2020
Evaluating Gender Bias in Speech Translation
Marta R. Costa-jussà, Christine Basta, Gerard I. Gállego
27 Oct 2020
Astraea: Grammar-based Fairness Testing
E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
06 Oct 2020
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020
Utility is in the Eye of the User: A Critique of NLP Leaderboards
Kawin Ethayarajh, Dan Jurafsky
29 Sep 2020
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith
24 Sep 2020
Critical Thinking for Language Models
Gregor Betz, Christian Voigt, Kyle Richardson
15 Sep 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations
Samson Tan, Chenyu You, Min-Yen Kan, R. Socher
09 May 2020
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
03 May 2020
Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
01 May 2020
RobBERT: a Dutch RoBERTa-based Language Model
Pieter Delobelle, Thomas Winters, Bettina Berendt
17 Jan 2020
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019
Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
Ameya Vaidya, Feng Mai, Yue Ning
21 Sep 2019
The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
03 Sep 2019