Using Pre-Trained Transformer for Better Lay Summarization
Seungwon Kim
First Workshop on Scholarly Document Processing (SDP 2020) Workshop Paper
Abstract:
In this paper, we tackle the lay summarization task, which aims to automatically produce lay summaries of scientific papers, as part of the first CL-LaySumm 2020 shared task at the SDP workshop at EMNLP 2020. We present our approach of using Pre-training with Extracted Gap-sentences for Abstractive Summarization (PEGASUS; Zhang et al., 2019b) to produce the lay summary, combined with an extractive summarization model based on Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2018) and readability metrics that score how easy a sentence is to read, to further improve the quality of the summary. Our model achieves strong performance on ROUGE metrics, demonstrating that the produced summaries are readable while capturing the main points of the document.
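To make the described pipeline concrete, below is a minimal sketch of generating an abstractive summary with a pre-trained PEGASUS model and scoring sentence readability. It assumes the HuggingFace Transformers implementation of PEGASUS and the textstat package's Flesch Reading Ease score; the specific checkpoint name, generation settings, and choice of readability metric are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: PEGASUS abstractive summary + per-sentence readability score.
# Assumptions: HuggingFace checkpoint "google/pegasus-arxiv" and the
# Flesch Reading Ease metric stand in for the paper's actual setup.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import textstat

MODEL_NAME = "google/pegasus-arxiv"  # assumed checkpoint
tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)

def abstractive_summary(document: str) -> str:
    """Generate a summary of the input document with PEGASUS."""
    batch = tokenizer(document, truncation=True, padding="longest",
                      return_tensors="pt")
    generated = model.generate(**batch, num_beams=4, max_length=256)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

def readability_score(sentence: str) -> float:
    """Higher Flesch Reading Ease means the sentence is easier to read."""
    return textstat.flesch_reading_ease(sentence)

if __name__ == "__main__":
    paper_text = "..."  # full text or abstract of a scientific paper
    summary = abstractive_summary(paper_text)
    print(summary, readability_score(summary))
```

In the paper's setting, scores like the one above could be used to rank or filter candidate sentences from an extractive (BERT-based) model so that more readable sentences are favored in the final lay summary.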