Social Biases in NLP Models as Barriers for Persons with Disabilities
arXiv:2005.00813
2 May 2020
Ben Hutchinson, Vinodkumar Prabhakaran, Emily L. Denton, Kellie Webster, Yu Zhong, Stephen Denuyl

Papers citing "Social Biases in NLP Models as Barriers for Persons with Disabilities"

13 / 163 papers shown

Rethinking Search: Making Domain Experts out of Dilettantes
Donald Metzler, Yi Tay, Dara Bahri, Marc Najork
05 May 2021

GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park
18 Apr 2021

Semantic maps and metrics for science using deep transformer encoders
Brendan Chambers, James A. Evans
13 Apr 2021

From Toxicity in Online Comments to Incivility in American News: Proceed with Caution
A. Hede, Oshin Agarwal, L. Lu, Diana C. Mutz, A. Nenkova
06 Feb 2021

Re-imagining Algorithmic Fairness in India and Beyond
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran
25 Jan 2021

Data and its (dis)contents: A survey of dataset development and use in machine learning research
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily L. Denton, A. Hanna
09 Dec 2020

Evaluating Bias In Dutch Word Embeddings
Rodrigo Alejandro Chávez Mulsa, Gerasimos Spanakis
31 Oct 2020

Characterising Bias in Compressed Models
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily L. Denton
06 Oct 2020

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith
24 Sep 2020

GeDi: Generative Discriminator Guided Sequence Generation
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, N. Keskar, Chenyu You, R. Socher, Nazneen Rajani
14 Sep 2020

The State of AI Ethics Report (June 2020)
Abhishek Gupta, Camylle Lanteigne, Victoria Heath, M. B. Ganapini, Erick Galinkin, Allison Cohen, Tania De Gasperis, Mo Akif, Renjie Butalid
25 Jun 2020

Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
W. Guo, Aylin Caliskan
06 Jun 2020

Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach
28 May 2020