Understanding and Mitigating Toxicity in Image-Text Pretraining Datasets: A Case Study on LLaVA

9 May 2025
Karthik Reddy Kanjula
Surya Guthikonda
Nahid Alam
Shayekh Bin Islam
Abstract

Pretraining datasets are foundational to the development of multimodal models, yet they often carry inherent biases and toxic content from the web-scale corpora they are sourced from. In this paper, we investigate the prevalence of toxicity in the LLaVA image-text pretraining dataset, examining how harmful content manifests across different modalities. We present a comprehensive analysis of common toxicity categories and propose targeted mitigation strategies, resulting in a refined toxicity-mitigated dataset that removes 7,531 toxic image-text pairs from the LLaVA pretraining dataset. We also offer guidelines for implementing robust toxicity detection pipelines. Our findings underscore the need to actively identify and filter toxic content, such as hate speech, explicit imagery, and targeted harassment, to build more responsible and equitable multimodal systems. The toxicity-mitigated dataset is open source and available for further research.
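The abstract mentions guidelines for robust toxicity detection pipelines. As a rough, hypothetical sketch of what a detect-and-filter pass over image-text pairs might look like (not the authors' actual pipeline), one could combine an off-the-shelf text-toxicity classifier such as Detoxify with an image-safety model. The record fields, file names, threshold, and the score_image_toxicity stub below are all assumptions for illustration.

# Hypothetical detect-and-filter sketch over an image-text pretraining set.
# Assumptions: pairs are stored as JSON records with "image" and "caption"
# fields; threshold and file names are illustrative, not the paper's values.
import json

from detoxify import Detoxify  # off-the-shelf multi-label text-toxicity model

TOXICITY_THRESHOLD = 0.5  # assumed cutoff; in practice, tune per category
text_model = Detoxify("original")


def score_image_toxicity(image_path: str) -> float:
    """Stub: swap in any NSFW/violence image classifier here."""
    return 0.0  # placeholder so the sketch runs end to end


def is_toxic(pair: dict) -> bool:
    # Detoxify returns a dict of per-category scores (toxicity, insult, ...).
    scores = text_model.predict(pair["caption"])
    if max(scores.values()) > TOXICITY_THRESHOLD:
        return True
    return score_image_toxicity(pair["image"]) > TOXICITY_THRESHOLD


with open("pretrain_pairs.json") as f:
    pairs = json.load(f)

kept = [p for p in pairs if not is_toxic(p)]
print(f"removed {len(pairs) - len(kept)} of {len(pairs)} pairs")

with open("pretrain_pairs_filtered.json", "w") as f:
    json.dump(kept, f)

In practice, a per-category threshold (rather than a single cutoff over the max score) gives finer control, since categories like explicit imagery and targeted harassment typically warrant different tolerances.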

View on arXiv
@article{kanjula2025_2505.06356,
  title={Understanding and Mitigating Toxicity in Image-Text Pretraining Datasets: A Case Study on LLaVA},
  author={Karthik Reddy Kanjula and Surya Guthikonda and Nahid Alam and Shayekh Bin Islam},
  journal={arXiv preprint arXiv:2505.06356},
  year={2025}
}