Social Biases in NLP Models as Barriers for Persons with Disabilities

2 May 2020
Ben Hutchinson, Vinodkumar Prabhakaran, Emily L. Denton, Kellie Webster, Yu Zhong, Stephen Denuyl

Papers citing "Social Biases in NLP Models as Barriers for Persons with Disabilities"

Showing 50 of 163 citing papers.

FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering
Heng Chang, Zhiting Fan, Ruizhe Chen, Xiaotang Gai, Luqi Gong, Yan Zhang, Zuozhu Liu
LLMSV · 40 · 1 · 0 · 20 Apr 2025

AgentRxiv: Towards Collaborative Autonomous Research
Samuel Schmidgall, Michael Moor
74 · 4 · 0 · 23 Mar 2025

BiasConnect: Investigating Bias Interactions in Text-to-Image Models
Pushkar Shukla, Aditya Chinchure, Emily Diana, A. Tolbert, K. Hosanagar, Vineeth N. Balasubramanian, Leonid Sigal, Matthew A. Turk
51 · 0 · 0 · 12 Mar 2025

Fair Text Classification via Transferable Representations
Thibaud Leteno, Michael Perrot, Charlotte Laclau, Antoine Gourru, Christophe Gravier
FaML · 88 · 0 · 0 · 10 Mar 2025

Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation
Lucie-Aimée Kaffee, Pepa Atanasova, Anna Rogers
44 · 0 · 0 · 19 Feb 2025

Understanding the Effects of Human-written Paraphrases in LLM-generated Text Detection
Hiu Ting Lau, Arkaitz Zubiaga
DeLMO · 47 · 1 · 0 · 06 Nov 2024

AUTALIC: A Dataset for Anti-AUTistic Ableist Language In Context
Naba Rizvi, Harper Strickland, Daniel Gitelman, Tristan Cooper, Alexis Morales-Flores, ..., Haaset Owens, Saleha Ahmedi, Isha Khirwadkar, Imani Munyaka, Nedjma Ousidhoum
37 · 0 · 0 · 21 Oct 2024

Speciesism in Natural Language Processing Research
Masashi Takeshita, Rafal Rzepka
24 · 1 · 0 · 18 Oct 2024

STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
Robert D Morabito, Sangmitra Madhusudan, Tyler McDonald, Ali Emami
31 · 0 · 0 · 20 Sep 2024

Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Danaë Metaxa, Sorelle A. Friedler
26 · 0 · 0 · 09 Sep 2024

Towards "Differential AI Psychology" and in-context Value-driven
  Statement Alignment with Moral Foundations Theory
Towards "Differential AI Psychology" and in-context Value-driven Statement Alignment with Moral Foundations Theory
Simon Münker
SyDa
32
0
0
21 Aug 2024
SMART-TBI: Design and Evaluation of the Social Media Accessibility and Rehabilitation Toolkit for Users with Traumatic Brain Injury
Yaxin Hu, Hajin Lim, Lisa Kakonge, Jade T. Mitchell, H. L. Johnson, Lyn Turkstra, Melissa C. Duff, Catalina L. Toma, Bilge Mutlu
26 · 0 · 0 · 19 Aug 2024

Downstream bias mitigation is all you need
Arkadeep Baksi, Rahul Singh, Tarun Joshi
AI4CE · 24 · 0 · 0 · 01 Aug 2024

Understanding the Interplay of Scale, Data, and Bias in Language Models: A Case Study with BERT
Muhammad Ali, Swetasudha Panda, Qinlan Shen, Michael Wick, Ari Kobren
MILM · 42 · 3 · 0 · 25 Jul 2024

How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies
Alina Leidinger, Richard Rogers
34 · 5 · 0 · 16 Jul 2024

fairBERTs: Erasing Sensitive Information Through Semantic and Fairness-aware Perturbations
Jinfeng Li, YueFeng Chen, Xiangyu Liu, Longtao Huang, Rong Zhang, Hui Xue
AAML · 37 · 0 · 0 · 11 Jul 2024

Listen and Speak Fairly: A Study on Semantic Gender Bias in Speech Integrated Large Language Models
Yi-Cheng Lin, T. Lin, Chih-Kai Yang, Ke-Han Lu, Wei-Chih Chen, Chun-Yi Kuan, Hung-yi Lee
34 · 3 · 0 · 09 Jul 2024

Fairness and Bias in Multimodal AI: A Survey
Tosin P. Adewumi, Lama Alkhaled, Namrata Gurung, G. V. Boven, Irene Pagliai
58 · 9 · 0 · 27 Jun 2024

OATH-Frames: Characterizing Online Attitudes Towards Homelessness with LLM Assistants
Jaspreet Ranjit, Brihi Joshi, Rebecca Dorn, Laura Petry, Olga Koumoundouros, Jayne Bottarini, Peichen Liu, Eric Rice, Swabha Swayamdipta
37 · 1 · 0 · 21 Jun 2024

Robustifying Safety-Aligned Large Language Models through Clean Data Curation
Xiaoqun Liu, Jiacheng Liang, Muchao Ye, Zhaohan Xi
AAML · 53 · 18 · 0 · 24 May 2024

A survey on fairness of large language models in e-commerce: progress, application, and challenge
Qingyang Ren, Zilin Jiang, Jinghan Cao, Sijia Li, Chiqu Li, Yiyang Liu, Shuning Huo, Tiange He, Yuan Chen
AILaw · FaML · 40 · 6 · 0 · 15 May 2024

What is Your Favorite Gender, MLM? Gender Bias Evaluation in Multilingual Masked Language Models
Emily M. Bender, Solon Barocas, Robert Sim, Hanna Wallach
39 · 3 · 0 · 09 Apr 2024

FairPair: A Robust Evaluation of Biases in Language Models through Paired Perturbations
Jane Dwivedi-Yu, Raaz Dwivedi, Timo Schick
43 · 2 · 0 · 09 Apr 2024

GeniL: A Multilingual Dataset on Generalizing Language
Aida Mostafazadeh Davani, S. Gubbi, Sunipa Dev, Shachi Dave, Vinodkumar Prabhakaran
46 · 1 · 0 · 08 Apr 2024

Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers
Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang
ALM · 39 · 7 · 0 · 04 Apr 2024

Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in LLMs
Pengda Wang, Zilin Xiao, Hanjie Chen, Frederick L. Oswald
31 · 6 · 0 · 01 Apr 2024

A Survey on Multilingual Large Language Models: Corpora, Alignment, and Bias
Yuemei Xu, Ling Hu, Jiayi Zhao, Zihan Qiu, Yuqi Ye, Hanwen Gu
LRM · 32 · 37 · 0 · 01 Apr 2024

A Roadmap Towards Automated and Regulated Robotic Systems
Yihao Liu, Mehran Armand
50 · 2 · 0 · 21 Mar 2024

Evaluating LLMs for Gender Disparities in Notable Persons
L. Rhue, Sofie Goethals, Arun Sundararajan
52 · 5 · 0 · 14 Mar 2024

From Fitting Participation to Forging Relationships: The Art of Participatory ML
Ned Cooper, Alex Zafiroglu
43 · 9 · 0 · 11 Mar 2024

Gender Bias in Large Language Models across Multiple Languages
Jinman Zhao, Yitian Ding, Chen Jia, Yining Wang, Zifan Qian
32 · 25 · 0 · 01 Mar 2024

Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification
Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty
39 · 14 · 0 · 28 Feb 2024

Prejudice and Volatility: A Statistical Framework for Measuring Social Discrimination in Large Language Models
Yiran Liu, Ke Yang, Zehan Qi, Xiao-Yang Liu, Yang Yu
47 · 1 · 0 · 23 Feb 2024

Investigating Cultural Alignment of Large Language Models
Badr AlKhamissi, Muhammad N. ElNokrashy, Mai AlKhamissi, Mona T. Diab
37 · 44 · 0 · 20 Feb 2024

Measuring and Reducing LLM Hallucination without Gold-Standard Answers
Jiaheng Wei, Yuanshun Yao, Jean-François Ton, Hongyi Guo, Andrew Estornell, Yang Liu
HILM · 55 · 18 · 0 · 16 Feb 2024

Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Tong Yu, Hanieh Deilamsalehy, Ruiyi Zhang, Sungchul Kim, Franck Dernoncourt
24 · 19 · 0 · 03 Feb 2024

Tradeoffs Between Alignment and Helpfulness in Language Models with Representation Engineering
Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
LLMSV · 31 · 13 · 0 · 29 Jan 2024

An investigation of structures responsible for gender bias in BERT and DistilBERT
Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Christophe Gravier
38 · 4 · 0 · 12 Jan 2024

Whose wife is it anyway? Assessing bias against same-gender relationships in machine translation
Ian Stewart, Rada Mihalcea
27 · 4 · 0 · 10 Jan 2024

A Group Fairness Lens for Large Language Models
Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiao-feng He
ALM · 34 · 4 · 0 · 24 Dec 2023

FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs
S. Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo
MU · 22 · 7 · 0 · 12 Dec 2023

How should the advent of large language models affect the practice of science?
Marcel Binz, Stephan Alaniz, Adina Roskies, B. Aczel, Carl T. Bergstrom, ..., Emily M. Bender, M. Marelli, Matthew M. Botvinick, Zeynep Akata, Eric Schulz
39 · 9 · 0 · 05 Dec 2023

TIBET: Identifying and Evaluating Biases in Text-to-Image Generative Models
Aditya Chinchure, Pushkar Shukla, Gaurav Bhatt, Kiri Salij, K. Hosanagar, Leonid Sigal, Matthew A. Turk
21 · 24 · 0 · 03 Dec 2023

Explaining CLIP's performance disparities on data from blind/low vision users
Daniela Massiceti, Camilla Longden, Agnieszka Slowik, Samuel Wills, Martin Grayson, C. Morrison
VLM · 29 · 9 · 0 · 29 Nov 2023

SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata
Mark Díaz, Sunipa Dev, Emily Reif, Remi Denton, Vinodkumar Prabhakaran
33 · 3 · 0 · 28 Nov 2023

Fair Text Classification with Wasserstein Independence
Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Rémi Emonet, Christophe Gravier
FaML · 32 · 2 · 0 · 21 Nov 2023

Causal ATE Mitigates Unintended Bias in Controlled Text Generation
Rahul Madhavan, Kahini Wadhawan
43 · 0 · 0 · 19 Nov 2023

Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models
Zhaowei Zhu, Jialu Wang, Hao Cheng, Yang Liu
31 · 16 · 0 · 19 Nov 2023

Bias A-head? Analyzing Bias in Transformer-Based Language Model Attention Heads
Yi Yang, Hanyu Duan, Ahmed Abbasi, John P. Lalor, Kar Yan Tam
19 · 6 · 0 · 17 Nov 2023

OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models
Mingfeng Xue, Dayiheng Liu, Kexin Yang, Guanting Dong, Wenqiang Lei, Zheng Yuan, Chang Zhou, Jingren Zhou
LLMAG · 27 · 2 · 0 · 25 Oct 2023