Understanding the Origins of Bias in Word Embeddings
arXiv:1810.03611 · 8 October 2018
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, R. Zemel
FaML
Papers citing "Understanding the Origins of Bias in Word Embeddings"
42 papers shown.
A Comprehensive Analysis of Large Language Model Outputs: Similarity, Diversity, and Bias
Brandon Smith, Mohamed Reda Bouadjenek, Tahsin Alamgir Kheya, Phillip Dawson, S. Aryal
ALM, ELM · 14 May 2025

Mechanistic Unveiling of Transformer Circuits: Self-Influence as a Key to Model Reasoning
Lefei Zhang, Lijie Hu, Di Wang
LRM · 17 Feb 2025

Fine-Tuned LLMs are "Time Capsules" for Tracking Societal Bias Through Books
Sangmitra Madhusudan, Robert D Morabito, Skye Reid, Nikta Gohari Sadr, Ali Emami
07 Feb 2025

Data Debugging is NP-hard for Classifiers Trained with SGD
Zizheng Guo, Pengyu Chen, Yanzhang Fu, Xuelong Li
02 Aug 2024

Label Smoothing Improves Machine Unlearning
Zonglin Di, Zhaowei Zhu, Jinghan Jia, Jiancheng Liu, Zafar Takhirov, Bo Jiang, Yuanshun Yao, Sijia Liu, Yang Liu
11 Jun 2024

Data Quality in Edge Machine Learning: A State-of-the-Art Survey
M. D. Belgoumri, Mohamed Reda Bouadjenek, Sunil Aryal, Hakim Hacid
01 Jun 2024

Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification
Robert Vacareanu, F. Alam, M. Islam, Haris Riaz, Mihai Surdeanu
NAI · 05 Mar 2024

ConFit: Improving Resume-Job Matching using Data Augmentation and Contrastive Learning
Xiao Yu, Jinzhong Zhang, Zhou Yu
29 Jan 2024

Deeper Understanding of Black-box Predictions via Generalized Influence Functions
Hyeonsu Lyu, Jonggyu Jang, Sehyun Ryu, H. Yang
TDI, AI4CE · 09 Dec 2023

A Survey on Fairness in Large Language Models
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang
ALM · 20 Aug 2023

Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
Somayeh Ghanbarzadeh, Yan-ping Huang, Hamid Palangi, R. C. Moreno, Hamed Khanpour
20 Jul 2023

Taught by the Internet, Exploring Bias in OpenAIs GPT3
Ali Ayaz, Aditya Nawalgaria, Ruilian Yin
04 Jun 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
FedML · 21 Feb 2023

A Survey on Preserving Fairness Guarantees in Changing Environments
Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto
FaML · 14 Nov 2022

Influence Functions for Sequence Tagging Models
Sarthak Jain, Varun Manjunatha, Byron C. Wallace, A. Nenkova
TDI · 25 Oct 2022

A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche
14 Jul 2022

The Problem of Semantic Shift in Longitudinal Monitoring of Social Media: A Case Study on Mental Health During the COVID-19 Pandemic
Keith Harrigian, Mark Dredze
22 Jun 2022

Subverting Fair Image Search with Generative Adversarial Perturbations
A. Ghosh, Matthew Jagielski, Chris L. Wilson
05 May 2022

Regional Negative Bias in Word Embeddings Predicts Racial Animus--but only via Name Frequency
Austin Van Loon, Salvatore Giorgi, Robb Willer, J. Eichstaedt
20 Jan 2022

Scaling Up Influence Functions
Andrea Schioppa, Polina Zablotskaia, David Vilar, Artem Sokolov
TDI · 06 Dec 2021

Fairness in Missing Data Imputation
Yiliang Zhang, Q. Long
22 Oct 2021

Developing a novel fair-loan-predictor through a multi-sensitive debiasing pipeline: DualFair
Ashutosh Kumar Singh, Jashandeep Singh, Ariba Khan, Amar Gupta
FaML · 17 Oct 2021

Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
Robert Wolfe, Aylin Caliskan
01 Oct 2021

An unsupervised framework for tracing textual sources of moral change
Aida Ramezani, Zining Zhu, Frank Rudzicz, Yang Xu
01 Sep 2021

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
FaML · 12 Jul 2021

FairCanary: Rapid Continuous Explainable Fairness
Avijit Ghosh, Aalok Shanbhag, Christo Wilson
13 Jun 2021

Evaluating Gender Bias in Natural Language Inference
Shanya Sharma, Manan Dey, Koustuv Sinha
12 May 2021

On the Interpretability and Significance of Bias Metrics in Texts: a PMI-based Approach
Francisco Valentini, Germán Rosati, Damián E. Blasi, D. Slezak, Edgar Altszyler
13 Apr 2021

Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case
Adam Dahlgren Lindström, Suna Bensch, Johanna Björklund, F. Drewes
22 Feb 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong
TDI · 31 Dec 2020

Cross-Loss Influence Functions to Explain Deep Network Representations
Andrew Silva, Rohit Chopra, Matthew C. Gombolay
TDI · 03 Dec 2020

Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
Ryan Steed, Aylin Caliskan
SSL · 28 Oct 2020

Cultural Cartography with Word Embeddings
Dustin S. Stoltz, Marshall A. Taylor
09 Jul 2020

Influence Functions in Deep Learning Are Fragile
S. Basu, Phillip E. Pope, S. Feizi
TDI · 25 Jun 2020

Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
FaML · 19 Jun 2020

Mitigating Gender Bias in Captioning Systems
Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, Na Zou, Xia Hu
FaML · 15 Jun 2020

Algorithmic Fairness
Dana Pessach, E. Shmueli
FaML · 21 Jan 2020

Generating Interactive Worlds with Text
Angela Fan, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, ..., Shrimai Prabhumoye, Douwe Kiela, Tim Rocktaschel, Arthur Szlam, Jason Weston
20 Nov 2019

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
FaML · 04 Nov 2019

Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
24 Oct 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML · 23 Aug 2019