
DMDTEval: An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation

Zhibo Man
Yuanmeng Chen
Yujie Zhang
Yufeng Chen
Jinan Xu
Abstract

Large Language Models (LLMs) have achieved remarkable results in machine translation, yet their performance in multi-domain translation (MDT) remains less satisfactory: a word's meaning can vary across domains, making ambiguity a central challenge of MDT. Evaluating the disambiguation ability of LLMs in MDT therefore remains an open problem. To this end, we present DMDTEval, a systematic framework for evaluating and analyzing the disambiguation ability of LLMs in multi-domain translation, built on three critical components: (1) a translation test set annotated with multi-domain ambiguous words, (2) a diverse set of disambiguation prompting templates, and (3) precise disambiguation metrics. Using this framework, we study the efficacy of various prompting strategies on multiple state-of-the-art LLMs. Our extensive experiments reveal several crucial findings that we believe will pave the way for further research on improving the disambiguation of LLMs.
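The abstract does not specify how the disambiguation metrics are computed. As a rough illustration only, the sketch below shows what a word-level disambiguation accuracy metric could look like: each test item carries an annotated ambiguous source word and a set of domain-correct target-side renderings, and a translation counts as correct if it contains one of them. The data layout, names, and matching rule are assumptions for illustration, not the paper's actual metric or annotation format.

# A minimal, hypothetical sketch of a word-level disambiguation
# accuracy metric; the data layout is assumed, not taken from the paper.
from dataclasses import dataclass

@dataclass
class TestItem:
    source: str               # source sentence containing the ambiguous word
    ambiguous_word: str       # annotated multi-domain ambiguous word
    correct_senses: set[str]  # domain-correct target-side renderings

def disambiguation_accuracy(items: list[TestItem], hypotheses: list[str]) -> float:
    """Fraction of translations containing a domain-correct rendering
    of the annotated ambiguous word."""
    if not items:
        return 0.0
    hits = 0
    for item, hyp in zip(items, hypotheses):
        hyp_lower = hyp.lower()
        if any(sense.lower() in hyp_lower for sense in item.correct_senses):
            hits += 1
    return hits / len(items)

# Example: German "Laeufer" means "bishop" in the chess domain
# (elsewhere "runner" or "rug"), so only the chess sense counts here.
items = [TestItem(
    source="Der Laeufer blockiert die Diagonale.",
    ambiguous_word="Laeufer",
    correct_senses={"bishop"},
)]
print(disambiguation_accuracy(items, ["The bishop controls the diagonal."]))  # 1.0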

@article{man2025_2504.20371,
  title={DMDTEval: An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation},
  author={Zhibo Man and Yuanmeng Chen and Yujie Zhang and Yufeng Chen and Jinan Xu},
  journal={arXiv preprint arXiv:2504.20371},
  year={2025}
}