CLUE: A Chinese Language Understanding Evaluation Benchmark

13 April 2020
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun-jie Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhen-Yi Yang, Kyle Richardson, Zhenzhong Lan
Abstract

The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks. These comprehensive benchmarks have facilitated a broad range of research and applications in natural language processing (NLP). The problem, however, is that most such benchmarks are limited to English, which has made it difficult to replicate many of the successes in English NLU for other languages. To help remedy this issue, we introduce the first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text. To establish results on these tasks, we report scores using an exhaustive set of current state-of-the-art pre-trained Chinese models (9 in total). We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on Chinese NLU. Our benchmark is released at https://www.CLUEbenchmarks.com.

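For readers who want a concrete starting point, the sketch below shows how one CLUE classification task might be loaded and a pre-trained Chinese model fine-tuned on it. It is an illustration only, not the paper's official pipeline: the Hugging Face dataset id "clue", the "afqmc" sentence-pair config, and the "bert-base-chinese" checkpoint are assumptions for the example; the official data, baselines, and leaderboard live at https://www.CLUEbenchmarks.com.

```python
# Minimal sketch (assumptions noted above): fine-tune a Chinese BERT on one
# CLUE sentence-pair classification task and report validation accuracy.
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed Hub mirror of CLUE; AFQMC is a binary sentence-pair matching task
# with fields sentence1 / sentence2 / label.
raw = load_dataset("clue", "afqmc")
tok = AutoTokenizer.from_pretrained("bert-base-chinese")

def encode(batch):
    # Encode the two sentences as a single paired sequence.
    return tok(batch["sentence1"], batch["sentence2"],
               truncation=True, max_length=128)

encoded = raw.map(encode, batched=True)

def accuracy(eval_pred):
    # CLUE classification tasks are scored by accuracy.
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {"accuracy": float((preds == labels).mean())}

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)

args = TrainingArguments(output_dir="clue-afqmc",
                         per_device_train_batch_size=32,
                         num_train_epochs=3)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"],
                  tokenizer=tok,
                  compute_metrics=accuracy)
trainer.train()
print(trainer.evaluate())
```

The same loop applies to the other single-sentence and sentence-pair tasks in the benchmark by swapping the dataset config and the number of labels; the reading-comprehension tasks need a span-prediction or multiple-choice head instead.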