WenyanGPT: A Large Language Model for Classical Chinese Tasks

Xinyu Yao
Mengdi Wang
Bo Chen
Xiaobing Zhao
Abstract

Classical Chinese, as a core carrier of Chinese culture, plays a crucial role in the inheritance and study of ancient literature. However, existing natural language processing models are optimized primarily for Modern Chinese and perform inadequately on Classical Chinese. This paper presents a comprehensive solution for Classical Chinese language processing. Through continued pre-training and instruction fine-tuning of the LLaMA3-8B-Chinese model, we build WenyanGPT, a large language model designed specifically for Classical Chinese tasks. We also construct an evaluation benchmark dataset, WenyanBENCH. Experimental results on WenyanBENCH demonstrate that WenyanGPT significantly outperforms current advanced LLMs across a range of Classical Chinese tasks. We publicly release the model's training data, instruction fine-tuning data, and evaluation benchmark to promote further research and development in Classical Chinese processing.
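
The abstract does not specify the training stack. As a rough illustration of the instruction fine-tuning step it describes, the following is a minimal sketch using Hugging Face Transformers with LoRA (PEFT). The model ID, LoRA hyperparameters, and the example instruction pair are illustrative assumptions, not WenyanGPT's actual recipe.

# Minimal sketch: one instruction fine-tuning step on a LLaMA-family base model with LoRA.
# All names and hyperparameters below are assumptions, not the paper's published setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "hfl/llama-3-chinese-8b"  # assumed stand-in for the LLaMA3-8B-Chinese checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA keeps fine-tuning affordable; rank and target modules are illustrative choices.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# One hypothetical Classical Chinese instruction pair (translation into Modern Chinese).
prompt = "指令：将下面的文言文翻译成现代汉语。\n输入：学而时习之，不亦说乎？\n输出："
target = "学习并且时常温习它，不也很愉快吗？"

batch = tokenizer(prompt + target, return_tensors="pt")
labels = batch["input_ids"].clone()
labels[:, : len(tokenizer(prompt)["input_ids"])] = -100  # compute loss only on the response

model.train()
loss = model(**batch, labels=labels).loss  # a single illustrative training step
loss.backward()
print(f"loss: {loss.item():.3f}")

In practice such pairs would be batched over the full instruction dataset and driven by an optimizer; the masking of prompt tokens with -100 is the standard way to restrict the causal-LM loss to the response.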

@article{yao2025_2504.20609,
  title={WenyanGPT: A Large Language Model for Classical Chinese Tasks},
  author={Xinyu Yao and Mengdi Wang and Bo Chen and Xiaobing Zhao},
  journal={arXiv preprint arXiv:2504.20609},
  year={2025}
}