Cited By
TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models
6 March 2025
Jie He, Bo Peng, Yi-Lun Liao, Qun Liu, Deyi Xiong
Papers citing "TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models" (6 / 6 papers shown)
OpenEval: Benchmarking Chinese LLMs across Capability, Alignment and Safety
Chuang Liu, Linhao Yu, Jiaxuan Li, Renren Jin, Yufei Huang, ..., Tao Liu, Jinwang Song, Hongying Zan, Sun Li, Deyi Xiong
18 Mar 2024
Real or Fake Text?: Investigating Human Ability to Detect Boundaries Between Human-Written and Machine-Generated Text
Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, Chris Callison-Burch
24 Dec 2022
RuCoLA: Russian Corpus of Linguistic Acceptability
Vladislav Mikhailov, T. Shamardina, Max Ryabinin, A. Pestova, I. Smurov, Ekaterina Artemova
23 Oct 2022
Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
22 Aug 2019
e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
04 Dec 2018
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018