AskQE: Question Answering as Automatic Evaluation for Machine Translation

15 April 2025
Dayeon Ki
Kevin Duh
Marine Carpuat
Abstract

How can a monolingual English speaker determine whether an automatic translation into French is good enough to be shared? Existing MT error detection and quality estimation (QE) techniques do not address this practical scenario. We introduce AskQE, a question generation and answering framework designed to detect critical MT errors and provide actionable feedback, helping users decide whether to accept or reject MT outputs even without knowledge of the target language. Using ContraTICO, a dataset of contrastive synthetic MT errors in the COVID-19 domain, we explore design choices for AskQE and develop an optimized version relying on LLaMA-3 70B and entailed facts to guide question generation. We evaluate the resulting system on the BioMQM dataset of naturally occurring MT errors, where AskQE achieves higher Kendall's Tau correlation and decision accuracy against human ratings than other QE metrics.
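
The pipeline the abstract describes can be illustrated with a minimal sketch: generate questions from the source sentence (guided by facts the source entails), answer them once against the source and once against a backtranslation of the MT output, and compare the two answer sets to score the translation. The `ask_llm` helper and prompts below are hypothetical placeholders for any LLM call (the paper uses LLaMA-3 70B); this is an illustration under those assumptions, not the authors' released implementation.

```python
# Sketch of the AskQE idea: question generation from the source,
# question answering on source vs. backtranslated MT, and answer
# comparison as a quality signal. `ask_llm` is a placeholder for
# any text-in/text-out LLM call.

from typing import Callable, List

def askqe_score(
    source: str,               # original English source sentence
    backtranslated_mt: str,    # MT output translated back into English
    ask_llm: Callable[[str], str],
    num_questions: int = 5,
) -> float:
    # 1) Question generation, guided by facts entailed by the source.
    facts = ask_llm(f"List facts entailed by this sentence:\n{source}")
    questions_raw = ask_llm(
        f"Write {num_questions} short questions answerable from the sentence, "
        f"covering these facts:\n{facts}\nSentence: {source}"
    )
    questions: List[str] = [q.strip() for q in questions_raw.splitlines() if q.strip()]

    # 2) Answer each question against the source and against the backtranslation.
    matches = 0
    for q in questions:
        ans_src = ask_llm(f"Context: {source}\nQuestion: {q}\nAnswer briefly:")
        ans_mt = ask_llm(f"Context: {backtranslated_mt}\nQuestion: {q}\nAnswer briefly:")
        # 3) Naive exact-match comparison; the paper studies richer answer-similarity measures.
        if ans_src.strip().lower() == ans_mt.strip().lower():
            matches += 1

    # Fraction of preserved answers; a low score flags a potentially critical error.
    return matches / max(len(questions), 1)
```

A monolingual user could then accept the MT output when this score exceeds a chosen threshold and request a retranslation otherwise.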

View on arXiv
@article{ki2025_2504.11582,
  title={AskQE: Question Answering as Automatic Evaluation for Machine Translation},
  author={Dayeon Ki and Kevin Duh and Marine Carpuat},
  journal={arXiv preprint arXiv:2504.11582},
  year={2025}
}