A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining

Chenguang Zhu, Ruochen Xu, Michael Zeng, Xuedong Huang

1st Workshop on Computational Approaches to Discourse (Workshop Paper)


Abstract: With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for text summarization and dialogue systems. However, the semantic structure and style of meeting transcripts are quite different from those of articles and conversations. In this paper, we propose a novel abstractive summary network that adapts to the meeting scenario. We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the differences among speakers. Furthermore, due to the inadequacy of meeting summary data, we pretrain the model on large-scale news summary data. Empirical results show that our model outperforms previous approaches in both automatic metrics and human evaluation. For example, on the ICSI dataset, the ROUGE-1 score increases from 34.66% to 46.28%.
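As a rough illustration only (not the authors' implementation), the sketch below shows one way a hierarchical encoder with per-speaker role vectors might be structured in PyTorch: a lower-level encoder over the words of each turn, a pooled turn vector combined with a learned role embedding, and an upper-level encoder over the sequence of turns. All module names, dimensions, and the pooling choice are assumptions based solely on the abstract.

```python
# Hypothetical sketch of a hierarchical meeting encoder with role embeddings.
# This is NOT the paper's released code; the two-level (word-level then
# turn-level) layout and all hyperparameters here are assumptions drawn only
# from the abstract's mention of "a hierarchical structure" and "a role vector".
import torch
import torch.nn as nn


class HierarchicalMeetingEncoder(nn.Module):
    def __init__(self, vocab_size=30000, num_roles=10, d_model=256, nhead=4):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # One learned vector per speaker role, added to each turn representation.
        self.role_emb = nn.Embedding(num_roles, d_model)
        word_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        turn_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Lower level: encodes the words inside each turn.
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers=2)
        # Upper level: encodes the sequence of turn vectors, so no attention
        # ever spans the full transcript at the word level.
        self.turn_encoder = nn.TransformerEncoder(turn_layer, num_layers=2)

    def forward(self, token_ids, role_ids):
        # token_ids: (num_turns, turn_len) word ids for each turn
        # role_ids:  (num_turns,) speaker-role id for each turn
        words = self.word_encoder(self.word_emb(token_ids))  # (T, L, d)
        turn_vecs = words.mean(dim=1)                        # pool each turn
        turn_vecs = turn_vecs + self.role_emb(role_ids)      # inject role vector
        return self.turn_encoder(turn_vecs.unsqueeze(0))     # (1, T, d)


# Usage with toy shapes: 5 turns of 12 tokens each.
enc = HierarchicalMeetingEncoder()
out = enc(torch.randint(0, 30000, (5, 12)), torch.randint(0, 10, (5,)))
print(out.shape)  # torch.Size([1, 5, 256])
```

The turn-level summary vectors produced here would then feed a decoder (attending over turns, and optionally over words) to generate the abstractive summary; that decoder and the cross-domain news pretraining step are not sketched.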