RoMe: A Robust Metric for Evaluating Natural Language Generation
Md. Rony, Liubov Kovriguina, Debanjan Chaudhuri, Ricardo Usbeck, Jens Lehmann
arXiv:2203.09183 · 17 March 2022

Papers citing "RoMe: A Robust Metric for Evaluating Natural Language Generation" (8 of 8 papers shown)

BiasAsker: Measuring the Bias in Conversational AI System
Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu
21 May 2023

Toward Human-Like Evaluation for Natural Language Generation with Error Analysis
Qingyu Lu, Liang Ding, Liping Xie, Kanjian Zhang, Derek F. Wong, Dacheng Tao
20 Dec 2022

Layer or Representation Space: What makes BERT-based Evaluation Metrics Robust?
Doan Nam Long Vu, N. Moosavi, Steffen Eger
06 Sep 2022

MENLI: Robust Evaluation Metrics from Natural Language Inference
Yanran Chen, Steffen Eger
15 Aug 2022

Reproducibility Issues for BERT-based Evaluation Metrics
Yanran Chen, Jonas Belouadi, Steffen Eger
30 Mar 2022

UScore: An Effective Approach to Fully Unsupervised Evaluation Metrics for Machine Translation
Jonas Belouadi, Steffen Eger
21 Feb 2022

Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text
Sebastian Gehrmann, Elizabeth Clark, Thibault Sellam
14 Feb 2022

Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning
16 Mar 2020