ELAB: Extensive LLM Alignment Benchmark in Persian Language

17 April 2025
Zahra Pourbahman
Fatemeh Rajabi
Mohammadhossein Sadeghi
Omid Ghahroodi
Somaye Bakhshaei
Arash Amini
Reza Kazemi
Mahdieh Soleymani Baghshah
Abstract

This paper presents a comprehensive evaluation framework for aligning Persian Large Language Models (LLMs) with critical ethical dimensions, including safety, fairness, and social norms. It addresses the gaps in existing LLM evaluation frameworks by adapting them to Persian linguistic and cultural contexts. The benchmark comprises three types of Persian-language data: (i) translated data, (ii) synthetically generated data, and (iii) newly collected natural data. We translate Anthropic Red Teaming data, AdvBench, HarmBench, and DecodingTrust into Persian. Furthermore, we create ProhibiBench-fa, SafeBench-fa, FairBench-fa, and SocialBench-fa as new datasets addressing harmful and prohibited content with respect to indigenous Persian culture. Moreover, we collect an extensive dataset, GuardBench-fa, to capture Persian cultural norms. By combining these datasets, our work establishes a unified framework for evaluating Persian LLMs, offering a new approach to culturally grounded alignment evaluation. A systematic evaluation of Persian LLMs is performed across the three alignment aspects: safety (avoiding harmful content), fairness (mitigating biases), and social norms (adhering to culturally accepted behaviors). We present a publicly available leaderboard that benchmarks Persian LLMs with respect to safety, fairness, and social norms at: this https URL.
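The unified evaluation described in the abstract can be pictured as a loop that scores a model on each dataset and aggregates results per alignment dimension. The sketch below is a minimal illustration only: the subset-to-dimension mapping, file names, JSONL schema, and the `model_response` and `is_aligned` functions are assumptions made for exposition, not the paper's released code or data format.

```python
import json

# Assumed mapping from benchmark subsets to the three alignment dimensions
# evaluated in ELAB. File names and the JSONL schema are illustrative, not
# the released format.
SUBSETS = {
    "safety": ["prohibibench_fa.jsonl", "safebench_fa.jsonl"],
    "fairness": ["fairbench_fa.jsonl"],
    "social_norms": ["socialbench_fa.jsonl", "guardbench_fa.jsonl"],
}

def model_response(prompt: str) -> str:
    """Placeholder for a call to the Persian LLM under evaluation."""
    raise NotImplementedError

def is_aligned(prompt: str, response: str, dimension: str) -> bool:
    """Placeholder judge: True if the response is considered aligned
    (e.g. refuses harmful requests, avoids biased or norm-violating
    content). A real judge could be rule-based or an LLM-as-judge;
    this is an assumption, not the paper's method."""
    raise NotImplementedError

def evaluate(subsets: dict[str, list[str]]) -> dict[str, float]:
    """Compute a per-dimension alignment rate over all subset files."""
    scores = {}
    for dimension, files in subsets.items():
        aligned, total = 0, 0
        for path in files:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    example = json.loads(line)  # assumed field: "prompt"
                    response = model_response(example["prompt"])
                    aligned += is_aligned(example["prompt"], response, dimension)
                    total += 1
        scores[dimension] = aligned / total if total else 0.0
    return scores
```

Per-dimension rates computed this way would correspond to the three leaderboard axes (safety, fairness, social norms) mentioned above.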

@article{pourbahman2025_2504.12553,
  title={ELAB: Extensive LLM Alignment Benchmark in Persian Language},
  author={Zahra Pourbahman and Fatemeh Rajabi and Mohammadhossein Sadeghi and Omid Ghahroodi and Somaye Bakhshaei and Arash Amini and Reza Kazemi and Mahdieh Soleymani Baghshah},
  journal={arXiv preprint arXiv:2504.12553},
  year={2025}
}