Online Back-Parsing for AMR-to-Text Generation

Xuefeng Bai, Linfeng Song, Yue Zhang

Language Generation (Long Paper)

Gather-1I: Nov 17 (02:00-04:00 UTC)


Abstract: AMR-to-text generation aims to produce a sentence that conveys the same meaning as an input AMR graph. Current research develops increasingly powerful graph encoders to better represent AMR graphs, while decoders based on standard language modeling are used to generate the outputs. We propose a decoder that, during text generation, back-predicts the AMR graph projected onto the target sentence. As a result, our outputs preserve the input meaning better than those of standard decoders. Experiments on two AMR benchmarks show the superiority of our model over the previous state-of-the-art system based on the graph Transformer.
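
The core idea is that the decoder reconstructs the input graph while it generates text, so graph fidelity becomes part of the training signal. The snippet below is a hypothetical, simplified PyTorch sketch of one such decoder step, not the authors' implementation: the class BackParsingDecoderStep, its prediction heads, and the node_reprs argument are all assumed names introduced for illustration.

```python
import torch
import torch.nn as nn


class BackParsingDecoderStep(nn.Module):
    """One decoder step that, besides predicting the next word, back-predicts
    the AMR node the word projects to and the labels of edges to other nodes.
    A minimal sketch of the idea in the abstract, not the released code.
    """

    def __init__(self, hidden_dim: int, vocab_size: int, num_edge_labels: int):
        super().__init__()
        self.cell = nn.LSTMCell(hidden_dim, hidden_dim)
        # Standard language-modeling head over the output vocabulary.
        self.word_head = nn.Linear(hidden_dim, vocab_size)
        # Scores how well the decoder state aligns with each input graph node.
        self.node_proj = nn.Linear(hidden_dim, hidden_dim)
        # Labels the edge between the aligned position and every graph node.
        self.edge_head = nn.Bilinear(hidden_dim, hidden_dim, num_edge_labels)

    def forward(self, x, state, node_reprs):
        # x: (batch, hidden) embedding of the previously generated word
        # node_reprs: (batch, num_nodes, hidden) from the graph encoder
        h, c = self.cell(x, state)
        word_logits = self.word_head(h)  # next-word distribution
        # Back-parsing signal 1: which AMR node does the emitted word realize?
        node_logits = torch.einsum('bh,bnh->bn', self.node_proj(h), node_reprs)
        # Back-parsing signal 2: edge labels from the current state to all nodes.
        state_tiled = h.unsqueeze(1).expand_as(node_reprs).contiguous()
        edge_logits = self.edge_head(state_tiled, node_reprs)
        return word_logits, node_logits, edge_logits, (h, c)
```

At training time, cross-entropy losses over the word, node, and edge predictions would be summed, so the decoder is explicitly penalized whenever the generated sentence drifts from the structure of the input graph.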


Similar Papers

Zero-Shot Crosslingual Sentence Simplification
Jonathan Mallinson, Rico Sennrich, Mirella Lapata
Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation
Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
Improving AMR Parsing with Sequence-to-Sequence Pre-training
Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
Sparse Text Generation
Pedro Henrique Martins, Zita Marinho, André F. T. Martins