
K-EXAONE Technical Report

Eunbi Choi
Kibong Choi
Seokhee Hong
Junwon Hwang
Hyojin Jeon
Hyunjik Jo
Joonkee Kim
Seonghwan Kim
Soyeon Kim
Sunkyoung Kim
Yireun Kim
Yongil Kim
Haeju Lee
Jinsik Lee
Kyungmin Lee
Sangha Park
Heuiyeen Yeen
Hwan Chang
Stanley Jungkyu Choi
Yejin Choi
Jiwon Ham
Kijeong Jeon
Geunyeong Jeong
Gerrard Jeongwon Jo
Yonghwan Jo
Jiyeon Jung
Naeun Kang
Dohoon Kim
Euisoon Kim
Hayeon Kim
Hyosang Kim
Hyunseo Kim
Jieun Kim
Minu Kim
Myoungshin Kim
Unsol Kim
Youchul Kim
YoungJin Kim
Chaeeun Lee
Chaeyoon Lee
Changhun Lee
Dahm Lee
Edward Hwayoung Lee
Honglak Lee
Jinsang Lee
Jiyoung Lee
Sangeun Lee
Seungwon Lim
Solji Lim
Woohyung Lim
Chanwoo Moon
Jaewoo Park
Jinho Park
Yongmin Park
Hyerin Seo
Wooseok Seo
Yongwoo Song
Sejong Yang
Sihoon Yang
Chang En Yea
Sihyuk Yi
Chansik Yoon
Dongkeun Yoon
Sangyeon Yoon
Hyeongu Yun
Main: 22 pages, 12 figures, 9 tables. Bibliography: 3 pages. Appendix: 4 pages.
Abstract

This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.
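The abstract's headline numbers (236B total parameters, 23B active) follow from the Mixture-of-Experts design: every expert counts toward the total, but only the few experts the router selects for a given token count as active. The short Python sketch below illustrates this accounting. All sizes in it (layer count, widths, expert count, top-k) are hypothetical placeholders, not K-EXAONE's published configuration, and attention and embedding parameters are omitted for simplicity.

```python
# Minimal sketch of MoE total-vs-active parameter accounting.
# All sizes are made-up placeholders, NOT K-EXAONE's real config;
# attention and embedding parameters are deliberately ignored.

from dataclasses import dataclass


@dataclass
class MoEConfig:
    num_layers: int    # number of MoE transformer layers
    d_model: int       # model (residual stream) width
    d_ff: int          # feed-forward width of each expert
    num_experts: int   # experts per MoE layer
    top_k: int         # experts the router activates per token


def expert_params(cfg: MoEConfig) -> int:
    # A simple two-matrix FFN expert: d_model -> d_ff -> d_model.
    return 2 * cfg.d_model * cfg.d_ff


def total_vs_active(cfg: MoEConfig) -> tuple[int, int]:
    per_expert = expert_params(cfg)
    # Total counts every expert in every layer; active counts only
    # the top_k experts actually run per token.
    total = cfg.num_layers * cfg.num_experts * per_expert
    active = cfg.num_layers * cfg.top_k * per_expert
    return total, active


# With many experts but a small top_k, the active share is roughly
# top_k / num_experts of the expert parameters -- the same mechanism
# that lets a 236B-total model activate only ~10% of it per token.
cfg = MoEConfig(num_layers=48, d_model=4096, d_ff=11008,
                num_experts=64, top_k=6)
total, active = total_vs_active(cfg)
print(f"total expert params: {total / 1e9:.1f}B, "
      f"active per token: {active / 1e9:.1f}B")
```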
