ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

"You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations

18 June 2024
H. Nghiem, John J. Prindle, Jieyu Zhao, Hal Daumé III

Papers citing “"You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations”

11 of 11 citing papers shown.

1. From Structured Prompts to Open Narratives: Measuring Gender Bias in LLMs Through Open-Ended Storytelling
   Evan Chen, Run-Jun Zhan, Yan-Bai Lin, Hung-Hsuan Chen (20 Mar 2025)
2. On the Mutual Influence of Gender and Occupation in LLM Representations [AI4CE]
   Haozhe An, Connor Baumler, Abhilasha Sancheti, Rachel Rudinger (09 Mar 2025)
3. Language Models Predict Empathy Gaps Between Social In-groups and Out-groups
   Yu Hou, Hal Daumé III, Rachel Rudinger (02 Mar 2025)
4. Refining Input Guardrails: Enhancing LLM-as-a-Judge Efficiency Through Chain-of-Thought Fine-Tuning and Alignment [AAML]
   Melissa Kazemi Rad, Huy Nghiem, Andy Luo, Sahil Wadhwa, Mohammad Sorower, Stephen Rawls (22 Jan 2025)
5. Who Does the Giant Number Pile Like Best: Analyzing Fairness in Hiring Contexts
   Preethi Seshadri, Seraphina Goldfarb-Tarrant (08 Jan 2025)
6. Natural Language Processing for Human Resources: A Survey [VLM]
   Naoki Otani, Nikita Bhutani, Estevam R. Hruschka (21 Oct 2024)
7. Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models
   Yi-Cheng Lin, Wei-Chih Chen, Hung-yi Lee (14 Aug 2024)
8. Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
   Shashank Gupta, Vaishnavi Shrivastava, A. Deshpande, A. Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot (08 Nov 2023)
9. Large Language Models are Zero-Shot Reasoners [ReLM, LRM]
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa (24 May 2022)
10. Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
    Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju (15 May 2022)
11. A Survey on Bias and Fairness in Machine Learning [SyDa, FaML]
    Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan (23 Aug 2019)