EXAONE 4.5 Technical Report

Eunbi Choi
Kibong Choi
Sehyun Chun
Seokhee Hong
Junwon Hwang
Hyojin Jeon
Ahra Jo
Hyunjik Jo
Yeonsik Jo
Joonkee Kim
Seonghwan Kim
Soyeon Kim
Sunkyoung Kim
Yireun Kim
Yongil Kim
Changhun Lee
Haeju Lee
Jinsik Lee
Kyungmin Lee
Sangha Park
Kwangrok Ryoo
Minju Seo
Sejong Yang
Heuiyeen Yeen
Hwan Chang
Stanley Jungkyu Choi
Yejin Choi
Kyubeen Han
Joonwon Jang
Kijeong Jeon
Geunyeong Jeong
Gerrard Jeongwon Jo
Jiyeon Jung
Daeseong Kim
Dohoon Kim
Dohyun Kim
Hyunseo Kim
Minu Kim
Myoungshin Kim
Youchul Kim
Byungoh Ko
Christopher Lee
Edward Hwayoung Lee
Honglak Lee
Jiyoung Lee
Sangeun Lee
Seungwon Lim
Woohyung Lim
Jueun Mun
Jaewoo Park
Jimin Park
Jinho Park
Yongmin Park
Wooseok Seo
Yongwoo Song
Sihyuk Yi
Kyungjae Yoo
Sangyeon Yoon
Main: 13 pages, 1 figure, 4 tables; Bibliography: 6 pages
Abstract

This technical report introduces EXAONE 4.5, the first open-weight vision language model released by LG AI Research. EXAONE 4.5 is architected by integrating a dedicated visual encoder into the existing EXAONE 4.0 framework, enabling native multimodal pretraining over both visual and textual modalities. The model is trained on large-scale data with careful curation, particularly emphasizing document-centric corpora that align with LG's strategic application domains. This targeted data design enables substantial performance gains in document understanding and related tasks, while also delivering broad improvements across general language capabilities. EXAONE 4.5 extends context length up to 256K tokens, facilitating long-context reasoning and enterprise-scale use cases. Comparative evaluations demonstrate that EXAONE 4.5 achieves competitive performance in general benchmarks while outperforming state-of-the-art models of similar scale in document understanding and Korean contextual reasoning. As part of LG's ongoing effort toward practical industrial deployment, EXAONE 4.5 is designed to be continuously extended with additional domains and application scenarios to advance AI for a better life.
