arXiv:2408.03541
EXAONE 3.0 7.8B Instruction Tuned Language Model

7 August 2024
LG AI Research:
Soyoung An
Kyunghoon Bae
Eunbi Choi
Stanley Jungkyu Choi
Yemuk Choi
Seokhee Hong
Yeonjung Hong
Junwon Hwang
Hyojin Jeon
Gerrard Jeongwon Jo
Hyunjik Jo
Jiyeon Jung
Yountae Jung
Euisoon Kim
Hyosang Kim
Joonkee Kim
Seonghwan Kim
Soyeon Kim
Sunkyoung Kim
Yireun Kim
Youchul Kim
Edward Hwayoung Lee
Haeju Lee
Honglak Lee
Jinsik Lee
Kyungmin Lee
Moontae Lee
Seungjun Lee
Woohyung Lim
Sangha Park
Sooyoun Park
Yongmin Park
Boseong Seo
Sihoon Yang
Heuiyeen Yeen
Kyungjae Yoo
Hyeongu Yun
Abstract

We introduce the EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research. Among the different model sizes, we publicly release the 7.8B instruction-tuned model to promote open research and innovation. Through extensive evaluations across a wide range of public and in-house benchmarks, EXAONE 3.0 demonstrates highly competitive real-world performance and instruction-following capability against other state-of-the-art open models of similar size. Our comparative analysis shows that EXAONE 3.0 excels particularly in Korean, while achieving compelling performance on general tasks and complex reasoning. With its strong real-world effectiveness and bilingual proficiency, we hope that EXAONE continues to contribute to advancements in Expert AI. Our EXAONE 3.0 instruction-tuned model is available at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
