ResearchTrend.AI
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
28 May 2020
Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach

Papers citing "Language (Technology) is Power: A Critical Survey of "Bias" in NLP"

Showing 50 of 236 citing papers.

Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets (28 Jan 2023)
Tosin P. Adewumi, Isabella Sodergren, Lama Alkhaled, Sana Sabah Sabry, F. Liwicki, Marcus Liwicki

CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation (01 Jan 2023)
Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu

Trustworthy Social Bias Measurement (20 Dec 2022)
Rishi Bommasani, Percy Liang

POTATO: The Portable Text Annotation Tool (16 Dec 2022)
Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Jackson Sargent, Apostolos Dedeloudis, David Jurgens

On Safe and Usable Chatbots for Promoting Voter Participation (16 Dec 2022)
Bharath Muppasani, Vishal Pallagani, Kausik Lakkaraju, Shuge Lei, Biplav Srivastava, Brett W. Robertson, Andrea A. Hickerson, Vignesh Narayanan

RHO (ρ): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding (03 Dec 2022)
Ziwei Ji, Zihan Liu, Nayeon Lee, Tiezheng Yu, Bryan Wilie, Min Zeng, Pascale Fung
Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models (15 Nov 2022)
Silke Husse, Andreas Spitz

A Survey for Efficient Open Domain Question Answering (15 Nov 2022)
Qin Zhang, Shan Chen, Dongkuan Xu, Qingqing Cao, Xiaojun Chen, Trevor Cohn, Meng Fang

SocioProbe: What, When, and Where Language Models Learn about Sociodemographics (08 Nov 2022)
Anne Lauscher, Federico Bianchi, Samuel R. Bowman, Dirk Hovy

The Shared Task on Gender Rewriting (22 Oct 2022)
Bashar Alhafni, Nizar Habash, Houda Bouamor, Ossama Obeid, Sultan Alrowili, ..., Mohamed Gabr, Abderrahmane Issam, Abdelrahim Qaddoumi, K. Vijay-Shanker, Mahmoud Zyate

Choose Your Lenses: Flaws in Gender Bias Evaluation (20 Oct 2022)
Hadas Orgad, Yonatan Belinkov

How Hate Speech Varies by Target Identity: A Computational Analysis (19 Oct 2022)
Michael Miller Yoder, Lynnette Hui Xian Ng, D. W. Brown, Kathleen M. Carley

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information (19 Oct 2022)
I. Nejadgholi, Esma Balkir, Kathleen C. Fraser, S. Kiritchenko

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks (18 Oct 2022)
Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang

Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective (16 Oct 2022)
Ping Yang, Junjie Wang, Ruyi Gan, Xinyu Zhu, Lin Zhang, Ziwei Wu, Xinyu Gao, Jiaxing Zhang, Tetsuya Sakai
The User-Aware Arabic Gender Rewriter (14 Oct 2022)
Bashar Alhafni, Ossama Obeid, Nizar Habash

COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation (14 Oct 2022)
Nan Wang, Qifan Wang, Yi-Chia Wang, Maziar Sanjabi, Jingzhou Liu, Hamed Firooz, Hongning Wang, Shaoliang Nie

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models (13 Oct 2022)
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger

Back to the Future: On Potential Histories in NLP (12 Oct 2022)
Zeerak Talat, Anne Lauscher

Social-Group-Agnostic Word Embedding Debiasing via the Stereotype Content Model (11 Oct 2022)
Ali Omrani, Brendan Kennedy, M. Atari, Morteza Dehghani

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction (11 Oct 2022)
Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, ..., N'Mah Yilla-Akbari, Jess Gallegos, A. Smart, Emilio Garcia, Gurleen Virk

The Lifecycle of "Facts": A Survey of Social Bias in Knowledge Graphs (07 Oct 2022)
Angelie Kraft, Ricardo Usbeck

A Human Rights-Based Approach to Responsible AI (06 Oct 2022)
Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, Iason Gabriel

Improving alignment of dialogue agents via targeted human judgements (28 Sep 2022)
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
Re-contextualizing Fairness in NLP: The Case of India (25 Sep 2022)
Shaily Bhatt, Sunipa Dev, Partha P. Talukdar, Shachi Dave, Vinodkumar Prabhakaran

Controllable Text Generation for Open-Domain Creativity and Fairness (24 Sep 2022)
Nanyun Peng

A Review of Challenges in Machine Learning based Automated Hate Speech Detection (12 Sep 2022)
Abhishek Velankar, H. Patil, Raviraj Joshi

Lost in Translation: Reimagining the Machine Learning Life Cycle in Education (08 Sep 2022)
Lydia T. Liu, Serena Wang, Tolani A. Britton, Rediet Abebe

"Es geht um Respekt, nicht um Technologie" ("It's About Respect, Not Technology"): Insights from a Cross-Stakeholder Workshop on Gender-Fair Language and Language Technology (06 Sep 2022)
Sabrina Burtscher, Katta Spiel, Lukas Daniel Klausner, Manuel Lardelli, Dagmar Gromann

Debiasing Word Embeddings with Nonlinear Geometry (29 Aug 2022)
Lu Cheng, Nayoung Kim, Huan Liu

Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies (18 Aug 2022)
Gati Aher, Rosa I. Arriaga, Adam Tauman Kalai

Visual Comparison of Language Model Adaptation (17 Aug 2022)
Rita Sevastjanova, E. Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady

Towards No.1 in CLUE Semantic Matching Challenge: Pre-trained Language Model Erlangshen with Propensity-Corrected Loss (05 Aug 2022)
Junjie Wang, Yuxiang Zhang, Ping Yang, Ruyi Gan

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model (02 Aug 2022)
Saleh Soltan, Shankar Ananthakrishnan, Jack G. M. FitzGerald, Rahul Gupta, Wael Hamza, ..., Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Premkumar Natarajan
On the Limitations of Sociodemographic Adaptation with Transformers (01 Aug 2022)
Chia-Chien Hung, Anne Lauscher, Dirk Hovy, Simone Paolo Ponzetto, Goran Glavaš

A Hazard Analysis Framework for Code Synthesis Large Language Models (25 Jul 2022)
Heidy Khlaaf, Pamela Mishkin, Joshua Achiam, Gretchen Krueger, Miles Brundage

BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling (14 Jul 2022)
Javier de la Rosa, E. G. Ponferrada, Paulo Villegas, Pablo González de Prado Salas, Manu Romero, María Grandury

A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America (14 Jul 2022)
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche

FairDistillation: Mitigating Stereotyping in Language Models (10 Jul 2022)
Pieter Delobelle, Bettina Berendt

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models (23 Jun 2022)
Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models (16 Jun 2022)
Maribeth Rauh, John F. J. Mellor, J. Uesato, Po-Sen Huang, Johannes Welbl, ..., Amelia Glaese, G. Irving, Iason Gabriel, William S. Isaac, Lisa Anne Hendricks

Detecting Harmful Online Conversational Content towards LGBTQIA+ Individuals (15 Jun 2022)
Jamell Dacon, Harry Shomer, Shaylynn Crum-Dacon, Jiliang Tang

Resolving the Human Subjects Status of Machine Learning's Crowdworkers (08 Jun 2022)
Divyansh Kaushik, Zachary Chase Lipton, A. London
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models (08 Jun 2022)
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser

On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting (01 Jun 2022)
Tomasz Korbak, Hady ElSahar, Germán Kruszewski, Marc Dymetman

Toward Understanding Bias Correlations for Mitigation in NLP (24 May 2022)
Lu Cheng, Suyu Ge, Huan Liu

Conditional Supervised Contrastive Learning for Fair Text Classification (23 May 2022)
Jianfeng Chi, Will Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian

KOLD: Korean Offensive Language Dataset (23 May 2022)
Young-kuk Jeong, Juhyun Oh, Jaimeen Ahn, Jongwon Lee, Jihyung Moon, Sungjoon Park, Alice H. Oh

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset (18 May 2022)
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams

Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications (13 May 2022)
Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé, Kaheer Suleman, Alexandra Olteanu