Fine-Grained Reward Optimization for Machine Translation using Error Severity Mappings

Reinforcement learning (RL) has proven to be an effective and robust method for training neural machine translation systems, especially when paired with powerful reward models that accurately assess translation quality. However, most research has focused on RL methods that use sentence-level feedback, which leads to inefficient learning signals due to the reward sparsity problem: the model receives a single score for the entire sentence. To address this, we propose a novel approach that leverages fine-grained, token-level quality assessments along with error severity levels using RL methods. Specifically, we use xCOMET, a state-of-the-art quality estimation system, as our token-level reward model. We conduct experiments on small and large translation datasets with standard encoder-decoder and large language model-based machine translation systems, comparing the impact of sentence-level versus fine-grained reward signals on translation quality. Our results show that training with token-level rewards improves translation quality across language pairs over baselines, according to both automatic and human evaluation. Furthermore, token-level reward optimization improves training stability, as evidenced by a steady increase in mean rewards over training epochs.
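To make the idea of severity-based token rewards concrete, below is a minimal, self-contained sketch (not the authors' code and not the xCOMET API) of how error spans annotated with severity levels could be mapped to per-token rewards. The severity-to-penalty values, the `ErrorSpan` structure, and the `token_level_rewards` helper are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical severity-to-penalty mapping, loosely inspired by MQM-style weights.
SEVERITY_PENALTY = {"minor": -1.0, "major": -5.0, "critical": -10.0}

@dataclass
class ErrorSpan:
    start: int       # character offset where the error span begins
    end: int         # character offset where the error span ends (exclusive)
    severity: str    # "minor", "major", or "critical"

def token_level_rewards(tokens: List[str],
                        offsets: List[Tuple[int, int]],
                        spans: List[ErrorSpan],
                        base_reward: float = 0.0) -> List[float]:
    """Assign a reward to each token: base_reward if the token is error-free,
    otherwise the penalty of the most severe error span overlapping it."""
    rewards = [base_reward] * len(tokens)
    for i, (tok_start, tok_end) in enumerate(offsets):
        for span in spans:
            # Character-overlap test between the token and the error span.
            if tok_start < span.end and span.start < tok_end:
                rewards[i] = min(rewards[i], SEVERITY_PENALTY[span.severity])
    return rewards

# Usage example with a hypothetical tokenization and one "major" error span.
tokens = ["The", "cat", "sat", "on", "the", "mat"]
offsets = [(0, 3), (4, 7), (8, 11), (12, 14), (15, 18), (19, 22)]
spans = [ErrorSpan(start=8, end=11, severity="major")]
print(token_level_rewards(tokens, offsets, spans))
# -> [0.0, 0.0, -5.0, 0.0, 0.0, 0.0]
```

In an RL setup along these lines, such per-token rewards would replace the single sentence-level score, giving the policy a denser learning signal that localizes which tokens contributed to quality errors.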
@article{ramos2025_2411.05986,
  title={Fine-Grained Reward Optimization for Machine Translation using Error Severity Mappings},
  author={Miguel Moura Ramos and Tomás Almeida and Daniel Vareta and Filipe Azevedo and Sweta Agrawal and Patrick Fernandes and André F. T. Martins},
  journal={arXiv preprint arXiv:2411.05986},
  year={2025}
}