OpenEthics: A Comprehensive Ethical Evaluation of Open-Source Generative Large Language Models

Generative large language models present significant potential but also raise critical ethical concerns. Most studies focus on narrow ethical dimensions and cover only a limited range of languages and models. To address these gaps, we conduct a broad ethical evaluation of 29 recent open-source large language models using a novel dataset covering four ethical aspects: robustness, reliability, safety, and fairness. We analyze model behavior in both a commonly used language, English, and a low-resource language, Turkish. Our aim is to provide a comprehensive ethical assessment and guide safer model development by filling existing gaps in evaluation breadth, language coverage, and model diversity. Our experimental results, based on LLM-as-a-Judge, reveal that optimization efforts for many open-source models appear to have prioritized safety and fairness; robustness is generally good, while reliability remains a concern. We demonstrate that ethical evaluation can be effectively conducted independently of the language used. In addition, models with larger parameter counts tend to exhibit better ethical performance, with Gemma and Qwen models demonstrating the most ethical behavior among those evaluated.
@article{çetin2025_2505.16036,
  title={OpenEthics: A Comprehensive Ethical Evaluation of Open-Source Generative Large Language Models},
  author={Burak Erinç Çetin and Yıldırım Özen and Elif Naz Demiryılmaz and Kaan Engür and Cagri Toraman},
  journal={arXiv preprint arXiv:2505.16036},
  year={2025}
}