Towards Enhancing Faithfulness for Neural Machine Translation

Rongxiang Weng, Heng Yu, Xiangpeng Wei, Weihua Luo

Machine Translation and Multilinguality Long Paper

Gather-2E: Nov 17 (10:00-12:00 UTC)


Abstract: Neural machine translation (NMT) has achieved great success due to its ability to generate high-quality sentences. Compared with human translation, one drawback of current NMT is that its output is not always faithful to the input, e.g., it may omit information or generate unrelated fragments, which inevitably degrades overall quality, especially for human readers. In this paper, we propose a novel training strategy with a multi-task learning paradigm to build a faithfulness-enhanced NMT model (named FEnmt). During NMT training, we sample a subset of the training set and translate it to collect fragments that have been mistranslated. The proposed multi-task learning paradigm is then applied to both the encoder and the decoder to guide the NMT model to translate these fragments correctly. Both automatic and human evaluations verify that our FEnmt improves translation quality by effectively reducing unfaithful translations.
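The abstract's training strategy (sample a subset of the training data, translate it with the current model to flag mistranslated fragments, then add an auxiliary objective over those fragments) can be illustrated with a toy loop. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' released code: ToyNMT, find_mistranslated_fragments, and the fragment-weighted auxiliary loss are all assumptions standing in for the paper's actual multi-task objectives on the encoder and decoder.

```python
# Hypothetical sketch of the faithfulness-enhanced training loop.
# All names here are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn

class ToyNMT(nn.Module):
    """A tiny encoder-decoder stand-in for a real NMT model."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))
        dec, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec)  # (batch, tgt_len, vocab)

def find_mistranslated_fragments(model, batch):
    """Hypothetical stand-in: translate a sampled subset and mark the
    target positions where the current model misses the reference."""
    with torch.no_grad():
        pred = model(batch["src"], batch["tgt_in"]).argmax(-1)
    return (pred != batch["tgt_out"]).float()  # 1.0 = mistranslated

def train_step(model, batch, optimizer, aux_weight=0.5):
    ce = nn.CrossEntropyLoss(reduction="none")
    logits = model(batch["src"], batch["tgt_in"])
    token_loss = ce(logits.transpose(1, 2), batch["tgt_out"])
    # Main translation loss over all target tokens.
    mt_loss = token_loss.mean()
    # Auxiliary loss: up-weight the flagged fragments, guiding the model
    # to translate them correctly (the multi-task paradigm, simplified).
    mask = find_mistranslated_fragments(model, batch)
    aux_loss = (token_loss * mask).sum() / mask.sum().clamp(min=1.0)
    loss = mt_loss + aux_weight * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyNMT()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy batch standing in for a subset sampled from the training set.
    batch = {
        "src": torch.randint(0, 1000, (8, 12)),
        "tgt_in": torch.randint(0, 1000, (8, 10)),
        "tgt_out": torch.randint(0, 1000, (8, 10)),
    }
    for step in range(3):
        print(f"step {step}: loss={train_step(model, batch, opt):.3f}")
```

The key design choice in this sketch is reusing the per-token cross-entropy twice: once averaged over all positions as the usual NMT loss, and once masked to the mistranslated fragments as the auxiliary signal, so no extra decoder pass is needed for the second term.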


Similar Papers

Can Automatic Post-Editing Improve NMT?
Shamil Chollampatt, Raymond Hendy Susanto, Liling Tan, Ewa Szymanska
Uncertainty-Aware Semantic Augmentation for Neural Machine Translation
Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo
Translation Artifacts in Cross-lingual Transfer Learning
Mikel Artetxe, Gorka Labaka, Eneko Agirre