Identifying Gender Stereotypes and Biases in Automated Translation from English to Italian using Similarity Networks

17 February 2025
Fatemeh Mohammadi
Marta Annamaria Tamborini
Paolo Ceravolo
Costanza Nardocci
Samira Maghool
Abstract

This paper is a collaborative effort between Linguistics, Law, and Computer Science to evaluate stereotypes and biases in automated translation systems. We advocate gender-neutral translation as a means to promote gender inclusion and improve the objectivity of machine translation. Our approach focuses on identifying gender bias in English-to-Italian translations. First, we define gender bias following human rights law and the linguistics literature. We then identify gender-specific terms, such as she/lei and he/lui, as key elements, and evaluate the cosine similarity between these target terms and other terms in the dataset to reveal the model's perception of semantic relations. Using numerical features, we evaluate the intensity and direction of the bias. Our findings provide tangible insights for developing and training gender-neutral translation algorithms.
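The similarity-based measurement described in the abstract can be sketched as follows: for a given term, compare its cosine similarity to the feminine and masculine anchor terms (lei/lui); the signed difference gives the direction and intensity of the bias. This is a minimal illustration only — the embedding vectors and vocabulary below are hypothetical placeholders, not the paper's data or model.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional embeddings (hypothetical values, for illustration only).
emb = {
    "lei":        np.array([0.9, 0.1, 0.0, 0.2]),  # feminine anchor ("she")
    "lui":        np.array([0.1, 0.9, 0.0, 0.2]),  # masculine anchor ("he")
    "infermiera": np.array([0.8, 0.2, 0.1, 0.1]),  # "nurse" (feminine form)
    "ingegnere":  np.array([0.2, 0.8, 0.1, 0.1]),  # "engineer"
}

def gender_bias(term):
    """Signed bias score: positive = closer to 'lei', negative = closer to 'lui'.
    The magnitude reflects the intensity of the association."""
    return cosine(emb[term], emb["lei"]) - cosine(emb[term], emb["lui"])

for term in ("infermiera", "ingegnere"):
    print(f"{term}: bias = {gender_bias(term):+.3f}")
```

In a real experiment, the toy dictionary would be replaced by embeddings extracted from the translation model under study, and the score would be computed for every term in the dataset to build the similarity network.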

@article{mohammadi2025_2502.11611,
  title={Identifying Gender Stereotypes and Biases in Automated Translation from English to Italian using Similarity Networks},
  author={Fatemeh Mohammadi and Marta Annamaria Tamborini and Paolo Ceravolo and Costanza Nardocci and Samira Maghool},
  journal={arXiv preprint arXiv:2502.11611},
  year={2025}
}