FairPIVARA: Reducing and Assessing Biases in CLIP-Based Multimodal Models

28 September 2024
Diego A. B. Moreira
Alef Iury Ferreira
Jhessica Silva
G. O. D. Santos
Luiz Pereira
João Medrado Gondim
Gustavo Bonil
H. Maia
Nádia Da Silva
Simone Tiemi Hashiguti
Jefersson A. dos Santos
Hélio Pedrini
Sandra Avila
arXiv:2409.19474
Abstract

Despite significant advances in and pervasive use of vision-language models, few studies have addressed their ethical implications. These models typically require extensive training data, often drawn from hastily reviewed text and image sources, leading to highly imbalanced data and ethical concerns. Additionally, models initially trained in English, such as CLIP, are frequently fine-tuned for other languages; expanding them with more data can enhance their capabilities but can also introduce new biases. CAPIVARA, a CLIP-based model adapted to Portuguese, has shown strong performance in zero-shot tasks. In this paper, we evaluate four different types of discriminatory practices within vision-language models and introduce FairPIVARA, a method to reduce them by removing the most affected dimensions of feature embeddings. The application of FairPIVARA has led to a significant reduction of up to 98% in observed biases while promoting a more balanced word distribution within the model. Our model and code are available at: https://github.com/hiaac-nlp/FairPIVARA.
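The core mechanism the abstract describes, removing the feature dimensions most affected by bias, can be sketched as follows. This is a minimal illustration of the general idea under stated assumptions, not FairPIVARA's actual implementation: the bias score used here (the mean-activation difference between two concept groups) and all function and variable names are hypothetical, and the real method evaluates four types of discriminatory practice rather than a single score.

```python
# Minimal sketch of the dimension-removal idea: score each embedding
# dimension by how much it separates two concept groups, then zero out
# the most affected dimensions. Names and the scoring rule are
# illustrative assumptions, not FairPIVARA's actual API.
import numpy as np

def dimension_bias_scores(group_a: np.ndarray, group_b: np.ndarray) -> np.ndarray:
    """Per-dimension bias score: absolute difference between the mean
    activations of two concept groups (e.g., embeddings of "pleasant"
    vs. "unpleasant" terms). Shapes: (n_a, d) and (n_b, d)."""
    return np.abs(group_a.mean(axis=0) - group_b.mean(axis=0))

def remove_biased_dimensions(embeddings: np.ndarray,
                             scores: np.ndarray, k: int) -> np.ndarray:
    """Zero out the k dimensions with the highest bias scores, then
    L2-renormalize so the embeddings remain usable for cosine-similarity
    matching, as in CLIP-style zero-shot classification."""
    worst = np.argsort(scores)[-k:]      # indices of the most affected dims
    debiased = embeddings.copy()
    debiased[:, worst] = 0.0
    norms = np.linalg.norm(debiased, axis=1, keepdims=True)
    return debiased / np.clip(norms, 1e-12, None)

# Toy usage with random data standing in for CLIP text/image features.
rng = np.random.default_rng(0)
pleasant   = rng.normal(size=(8, 512))   # embeddings of "pleasant" terms
unpleasant = rng.normal(size=(8, 512))   # embeddings of "unpleasant" terms
images     = rng.normal(size=(16, 512))  # image features to debias

scores = dimension_bias_scores(pleasant, unpleasant)
debiased_images = remove_biased_dimensions(images, scores, k=32)
print(debiased_images.shape)  # (16, 512), with 32 dimensions zeroed out
```

Zeroing dimensions rather than retraining keeps the procedure cheap and model-agnostic; the renormalization step preserves unit-norm embeddings so downstream zero-shot pipelines need no changes.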
