How do datasets, developers, and models affect biases in a low-resourced language?

7 June 2025
Dipto Das
Shion Guha
Bryan Semaan
arXiv (abs) · PDF · HTML
Main: 11 pages · 3 figures · 9 tables · Bibliography: 5 pages · Appendix: 1 page
Abstract

Sociotechnical systems, such as language technologies, frequently exhibit identity-based biases. These biases exacerbate the experiences of historically marginalized communities and remain understudied in low-resource contexts. While models and datasets specific to a language or with multilingual support are commonly recommended to address these biases, this paper empirically tests the effectiveness of such approaches in the context of gender, religion, and nationality-based identities in Bengali, a widely spoken but low-resourced language. We conducted an algorithmic audit of sentiment analysis models built on mBERT and BanglaBERT, which were fine-tuned using all Bengali sentiment analysis (BSA) datasets from Google Dataset Search. Our analyses showed that BSA models exhibit biases across different identity categories despite having similar semantic content and structure. We also examined the inconsistencies and uncertainties arising from combining pre-trained models and datasets created by individuals from diverse demographic backgrounds. We connected these findings to the broader discussions on epistemic injustice, AI alignment, and methodological decisions in algorithmic audits.
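The audit described in the abstract can be sketched in a few lines: a sentiment classifier fine-tuned from mBERT or BanglaBERT on a BSA dataset is queried with sentence pairs that differ only in the identity term, and diverging predictions across otherwise identical sentences signal identity-based bias. The sketch below is not the authors' code; the checkpoint name, templates, and identity terms are illustrative placeholders, and the probes are written in English for readability where the study uses Bengali.

# Minimal sketch of a template-based bias probe (assumptions noted below),
# not the paper's actual audit pipeline.
from transformers import pipeline

# Hypothetical checkpoint: any mBERT/BanglaBERT model fine-tuned on a
# Bengali sentiment analysis (BSA) dataset would be plugged in here.
classifier = pipeline(
    "text-classification",
    model="your-org/banglabert-finetuned-bsa",  # placeholder name
)

# Illustrative English stand-ins; the actual probes would be Bengali
# sentences that differ only in the identity term.
templates = [
    "The {} person moved into the neighborhood.",
    "I had lunch with a {} colleague today.",
]
identity_terms = {
    "gender": ["man", "woman"],
    "religion": ["Hindu", "Muslim"],
    "nationality": ["Bangladeshi", "Indian"],
}

for category, terms in identity_terms.items():
    print(f"== {category} ==")
    for template in templates:
        for term in terms:
            sentence = template.format(term)
            result = classifier(sentence)[0]
            # Diverging labels/scores for sentences that differ only in the
            # identity term indicate identity-based bias in the model.
            print(f"{sentence!r}: {result['label']} ({result['score']:.3f})")

Repeating the same probe with models fine-tuned on different BSA datasets, or from different pre-trained backbones, is one way to separate dataset-driven from model-driven bias, which is the comparison the abstract describes.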

@article{das2025_2506.06816,
  title={How do datasets, developers, and models affect biases in a low-resourced language?},
  author={Dipto Das and Shion Guha and Bryan Semaan},
  journal={arXiv preprint arXiv:2506.06816},
  year={2025}
}