Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations

5 April 2025
Zihuai Zhao, Wenqi Fan, Yao Wu, Qing Li
Abstract

Large Language Models (LLMs) have demonstrated unprecedented language understanding and reasoning capabilities to capture diverse user preferences and advance personalized recommendations. Despite the growing interest in LLM-based recommendations, unique challenges arise for the trustworthiness of LLM-based recommender systems (LLM-RS). Unlike the distinct user/item representations in conventional recommender systems, users and items share a common textual representation (e.g., word embeddings) in LLM-based recommendations. Recent studies have revealed that LLMs, trained on large-scale uncurated datasets, are likely to inherit the stereotypes embedded ubiquitously in word embeddings. As a result, LLM-RS can exhibit stereotypical linguistic associations between users and items, causing a form of two-sided (i.e., user-to-item) recommendation unfairness. However, there remains a lack of studies investigating the unfairness of LLM-RS arising from intrinsic stereotypes, which can involve user and item groups simultaneously. To bridge this gap, this study introduces a new variant of fairness between stereotype groups containing both users and items, to quantify discrimination against stereotypes in LLM-RS. Moreover, to mitigate stereotype-aware unfairness in textual user and item representations, we propose a novel framework named Mixture-of-Stereotypes (MoS). In particular, a stereotype-wise routing strategy over multiple stereotype-relevant experts is designed to learn representations that are unbiased against different stereotypes in LLM-RS. Extensive experiments analyze the influence of stereotype-aware fairness in LLM-RS and the effectiveness of our proposed method, which consistently outperforms competitive baselines under various fairness settings.
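To make the stereotype-wise routing idea concrete, below is a minimal PyTorch sketch of a mixture-of-experts layer in the spirit the abstract describes: a router scores each textual user/item representation against a set of stereotype-relevant experts and mixes their outputs. The class name `StereotypeMoE`, the expert architecture, the number of experts, and the soft-gating scheme are all illustrative assumptions; the abstract does not specify the actual MoS implementation.

```python
# Minimal sketch (assumptions noted inline) of stereotype-wise expert
# routing, loosely following the abstract's description of MoS. This is
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StereotypeMoE(nn.Module):
    def __init__(self, dim: int, num_stereotype_experts: int = 4):
        super().__init__()
        # Assumption: one small MLP expert per stereotype dimension
        # (e.g., gender, age); the paper may use a different expert design.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_stereotype_experts)
        )
        # Router assigns each representation a weight over the experts.
        self.router = nn.Linear(dim, num_stereotype_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) textual user/item representations from the LLM.
        weights = F.softmax(self.router(x), dim=-1)                    # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        # Soft mixture: each representation is transformed by the
        # stereotype-relevant experts the router weights most highly.
        return torch.einsum("be,bed->bd", weights, expert_out)

# Usage sketch: debias a batch of 768-dim embeddings.
# moe = StereotypeMoE(dim=768)
# debiased = moe(torch.randn(8, 768))
```

In this sketch the gating is soft (all experts contribute, weighted by the router); a top-k hard routing scheme would be an equally plausible reading of "stereotype-wise routing", and the unbiasedness objective itself would come from an additional fairness-aware training loss not shown here.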

@article{zhao2025_2504.04199,
  title={Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations},
  author={Zihuai Zhao and Wenqi Fan and Yao Wu and Qing Li},
  journal={arXiv preprint arXiv:2504.04199},
  year={2025}
}