ResearchTrend.AI


arXiv:2305.17367
Augmenting Large Language Model Translators via Translation Memories

27 May 2023
Yongyu Mu
Abudurexiti Reheman
Zhiquan Cao
Yuchun Fan
Bei Li
Yinqiao Li
Tong Xiao
Chunliang Zhang
Jingbo Zhu
Abstract

Using translation memories (TMs) as prompts is a promising approach to in-context learning for machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to "understand" prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of state-of-the-art NMT systems, which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.
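The abstract's core idea, retrieving TM entries similar to the input sentence and placing them in the prompt as in-context examples, can be sketched as follows. The retrieval metric, prompt template, language pair, and the `retrieve_tm`/`build_prompt` helpers here are illustrative assumptions, not the paper's actual method; a minimal sketch might use character-level fuzzy matching:

```python
from difflib import SequenceMatcher

# Toy translation memory: (source, target) pairs. In practice this would be
# a large in-domain bilingual corpus with a fast fuzzy-match index.
TM = [
    ("The patient shows signs of fever.", "Der Patient zeigt Anzeichen von Fieber."),
    ("Take two tablets daily.", "Nehmen Sie täglich zwei Tabletten ein."),
    ("The weather is nice today.", "Das Wetter ist heute schön."),
]

def retrieve_tm(query, memory, k=2):
    """Return the k TM pairs whose source side is most similar to the query."""
    return sorted(
        memory,
        key=lambda pair: SequenceMatcher(None, query, pair[0]).ratio(),
        reverse=True,
    )[:k]

def build_prompt(query, memory, k=2):
    """Format retrieved TM pairs as few-shot examples, then append the query."""
    lines = ["Translate English to German."]
    for src, tgt in retrieve_tm(query, memory, k):
        lines.append(f"English: {src}\nGerman: {tgt}")
    lines.append(f"English: {query}\nGerman:")
    return "\n\n".join(lines)
```

The resulting string would be sent to the LLM, which completes the final `German:` line; the abstract's finding is that higher-quality retrieved matches yield markedly better translations.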

View on arXiv