On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs

21 April 2021
Roman Grundkiewicz
Marcin Junczys-Dowmunt
C. Federmann
Tom Kocmi

Papers citing "On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs"

5 / 5 papers shown
Reassessing Claims of Human Parity and Super-Human Performance in Machine Translation at WMT 2019
Antonio Toral
12 May 2020

Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation
Antonio Toral, Sheila Castilho, Ke Hu, Andy Way
30 Aug 2018

Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation
Samuel Läubli, Rico Sennrich, M. Volk
21 Aug 2018

Efficient Online Scalar Annotation with Bounded Support
Keisuke Sakaguchi, Benjamin Van Durme
04 Jun 2018

RankME: Reliable Human Ratings for Natural Language Generation
Jekaterina Novikova, Ondrej Dusek, Verena Rieser
15 Mar 2018