
A Comparative Benchmark of a Moroccan Darija Toxicity Detection Model (Typica.ai) and Major LLM-Based Moderation APIs (OpenAI, Mistral, Anthropic)

Abstract

This paper presents a comparative benchmark evaluating the performance of Typica.ai's custom Moroccan Darija toxicity detection model against major LLM-based moderation APIs: OpenAI (omni-moderation-latest), Mistral (mistral-moderation-latest), and Anthropic Claude (claude-3-haiku-20240307). We focus on culturally grounded toxic content, including implicit insults, sarcasm, and culturally specific aggression often overlooked by general-purpose systems. Using a balanced test set derived from the OMCD_Typica.ai_Mix dataset, we report precision, recall, F1-score, and accuracy, offering insights into challenges and opportunities for moderation in underrepresented languages. Our results highlight Typica.ai's superior performance, underlining the importance of culturally adapted models for reliable content moderation.
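The abstract reports four standard binary-classification metrics: precision, recall, F1-score, and accuracy. The sketch below (not taken from the paper; labels and predictions are purely illustrative, with 1 = toxic and 0 = non-toxic) shows how these metrics are derived from the confusion-matrix counts:

```python
def binary_metrics(y_true, y_pred):
    """Return (precision, recall, f1, accuracy) for binary labels (1 = toxic)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy

# Hypothetical gold labels and model predictions, for illustration only.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
p, r, f1, acc = binary_metrics(y_true, y_pred)
# Here tp=3, fp=1, fn=1, tn=3, so all four metrics equal 0.75.
```

Because the test set is described as balanced, accuracy is a meaningful headline number here; on imbalanced moderation data, precision/recall/F1 on the toxic class would be the more informative figures.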

@article{assoudi2025_2505.04640,
  title={A Comparative Benchmark of a Moroccan Darija Toxicity Detection Model (Typica.ai) and Major LLM-Based Moderation APIs (OpenAI, Mistral, Anthropic)},
  author={Hicham Assoudi},
  journal={arXiv preprint arXiv:2505.04640},
  year={2025}
}