Towards Multimodal Simultaneous Neural Machine Translation

Aizhan Imankulova, Masahiro Kaneko, Tosho Hirasawa, Mamoru Komachi

Fifth Conference on Machine Translation (WMT20) Workshop Paper


Abstract: Simultaneous translation involves translating a sentence before the speaker's utterance is completed, in order to realize real-time understanding in multiple languages. This task is significantly more challenging than general full-sentence translation because of the shortage of input information during decoding. To alleviate this shortage, we propose multimodal simultaneous neural machine translation (MSNMT), which leverages visual information as an additional modality. Our experiments with the Multi30k dataset showed that MSNMT significantly outperforms its text-only counterpart in low-latency settings that demand more timely translation. Furthermore, we verified the importance of visual information during decoding by performing an adversarial evaluation of MSNMT, in which we studied how the models behaved when given incongruent (mismatched) visual input and analyzed the effect of differing word order between the source and target languages.
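
The abstract does not specify the decoding policy, so the following is only a minimal, hypothetical sketch of how simultaneous decoding could combine a partial source sentence with a per-sentence image feature. It assumes a wait-k policy (read k source tokens, then emit one target token per newly read source token, a common choice in simultaneous NMT) and assumes the image is encoded once and visible to the decoder from the first step; toy_decode_step, wait_k_translate, and all other names are illustrative placeholders, not the authors' implementation.

    from typing import List

    def toy_decode_step(src_prefix: List[str], image_feature: List[float],
                        tgt_prefix: List[str]) -> str:
        # Placeholder decoder: a real model would attend over the encoded
        # source prefix and the image feature. Here we simply echo the next
        # source token so the example runs end to end.
        i = len(tgt_prefix)
        return src_prefix[i] if i < len(src_prefix) else "</s>"

    def wait_k_translate(source: List[str], image_feature: List[float],
                         k: int = 3, max_len: int = 50) -> List[str]:
        # Wait-k schedule: at each step the decoder may see at most
        # len(target) + k source tokens, while the image feature is fully
        # available from step 0.
        target: List[str] = []
        while len(target) < max_len:
            visible = min(len(source), len(target) + k)
            token = toy_decode_step(source[:visible], image_feature, target)
            if token == "</s>":
                break
            target.append(token)
        return target

    if __name__ == "__main__":
        src = "a dog runs across the field".split()
        img = [0.1] * 512  # stand-in for a pooled CNN feature of the paired image
        print(wait_k_translate(src, img, k=3))

With k=3, the first target token is emitted after reading only three source words, while the pooled image feature is available from the start; this is where an additional modality can partially compensate for the still-unseen source suffix.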