
TurkBench: A Benchmark for Evaluating Turkish Large Language Models

Çağrı Toraman
Ahmet Kaan Sever
Ayse Aysu Cengiz
Elif Ecem Arslan
Görkem Sevinç
Mete Mert Birdal
Yusuf Faruk Güldemir
Ali Buğra Kanburoğlu
Sezen Felekoğlu
Osman Gürlek
Sarp Kantar
Birsen Şahin Kütük
Büşra Tufan
Elif Genç
Serkan Coşkun
Gupse Ekin Demir
Muhammed Emin Arayıcı
Olgun Dursun
Onur Gungor
Susan Üsküdarlı
Abdullah Topraksoy
Esra Darıcı
Main: 8 pages; Bibliography: 3 pages; Appendix: 18 pages; 6 tables
Abstract

With the recent surge in the development of large language models, the need for comprehensive and language-specific evaluation benchmarks has become critical. While significant progress has been made in evaluating English-language models, benchmarks for other languages, particularly those with unique linguistic characteristics such as Turkish, remain less developed. Our study introduces TurkBench, a comprehensive benchmark designed to assess the capabilities of generative large language models in the Turkish language. TurkBench comprises 8,151 data samples across 21 distinct subtasks, organized under six main categories of evaluation: Knowledge, Language Understanding, Reasoning, Content Moderation, Turkish Grammar and Vocabulary, and Instruction Following. The diverse range of tasks and the culturally relevant data provide researchers and developers with a valuable tool for evaluating their models and identifying areas for improvement. We further publish our benchmark for online submissions at this https URL
