Exploring the traditional NMT model and Large Language Model for chat translation

Conference on Machine Translation (WMT), 2024
Jinlong Yang
Jiaxin Guo
Zongyao Li
Shaojun Li
Yuhao Xie
Jiawei Zheng
Bin Wei
Hao Yang
Main: 4 pages
Bibliography: 3 pages
6 tables
Abstract

This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en-de) in both directions. The experiments involved fine-tuning models on chat data and exploring various strategies, including Minimum Bayes Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research in the field of chat translation.
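As background on the MBR decoding strategy the abstract mentions: MBR selects, from a pool of sampled translations, the candidate with the highest expected utility against the other candidates, which serve as pseudo-references. The sketch below is a minimal illustration, not the paper's implementation; a toy word-overlap F1 stands in for a learned metric such as COMET, and all example sentences are invented.

```python
# Minimal sketch of Minimum Bayes Risk (MBR) decoding over a pool of
# candidate translations. Assumption: a toy word-overlap F1 replaces the
# neural utility metric a real system would use (e.g. COMET).

def overlap_f1(hyp: str, ref: str) -> float:
    """Toy utility: F1 over shared word types between two strings."""
    h, r = set(hyp.split()), set(ref.split())
    common = len(h & r)
    if common == 0:
        return 0.0
    p, rec = common / len(h), common / len(r)
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates: list[str]) -> str:
    """Return the candidate maximizing average utility against the rest
    of the pool (the pool itself acts as the set of pseudo-references)."""
    def expected_utility(i: int) -> float:
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(overlap_f1(candidates[i], o) for o in others) / len(others)
    best = max(range(len(candidates)), key=expected_utility)
    return candidates[best]

# Invented example pool of sampled translations:
samples = [
    "the meeting starts at nine",
    "the meeting begins at nine",
    "meeting at nine it starts",
]
print(mbr_decode(samples))  # → "the meeting starts at nine"
```

The first candidate wins because it overlaps most, on average, with the rest of the pool; MBR thus favors consensus translations over outliers, which is why it pairs naturally with self-training on the selected outputs.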
