Towards Enhancing Faithfulness for Neural Machine Translation
Rongxiang Weng, Heng Yu, Xiangpeng Wei, Weihua Luo
Machine Translation and Multilinguality Long Paper
Abstract:
Neural machine translation (NMT) has achieved great success due to its ability to generate high-quality sentences. Compared with human translations, one drawback of current NMT is that its output is not always faithful to the input, e.g., it omits information or generates unrelated fragments, which inevitably degrades overall quality, especially for human readers. In this paper, we propose a novel training strategy with a multi-task learning paradigm to build a faithfulness-enhanced NMT model (named FEnmt). During NMT training, we sample a subset of the training set and translate it to identify fragments that have been mistranslated. The proposed multi-task learning paradigm is then applied to both the encoder and the decoder to guide the model to translate these fragments correctly. Both automatic and human evaluations verify that FEnmt improves translation quality by effectively reducing unfaithful translations.
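As an illustration only, the sketch below shows the high-level control flow the abstract describes, in PyTorch-style Python. Every helper name here (sample_batch, find_mistranslated_fragments, fragment_loss, model.translation_loss) and every hyperparameter is a hypothetical placeholder, not the paper's actual API; the real sampling scheme and auxiliary losses are defined in the paper itself.

```python
import random

# Hypothetical sketch of the FEnmt training strategy described above.
# None of these helper names come from the paper; they only illustrate
# the proposed control flow: periodically translate a sampled subset,
# collect mistranslated fragments, and add a multi-task loss on them.

def train_fenmt(model, train_set, optimizer, num_steps,
                subset_ratio=0.01, refresh_every=1000, alpha=0.5):
    fragments = []
    for step in range(num_steps):
        batch = sample_batch(train_set)  # stand-in for real batching

        # Standard NMT objective (e.g., cross-entropy on reference tokens).
        nmt_loss = model.translation_loss(batch)

        # Periodically translate a sampled subset of the training data and
        # collect fragments the current model mistranslates (omissions or
        # unrelated output), as the abstract describes.
        if step % refresh_every == 0:
            subset = random.sample(train_set,
                                   max(1, int(len(train_set) * subset_ratio)))
            fragments = find_mistranslated_fragments(model, subset)

        # Auxiliary multi-task loss, applied on both encoder and decoder,
        # that pushes the model to translate those fragments correctly.
        aux_loss = fragment_loss(model, fragments) if fragments else 0.0

        loss = nmt_loss + alpha * aux_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```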