NLP-PINGAN-TECH @ CL-SciSumm 2020

Ling Chai, Guizhen Fu, Yuan Ni

First Workshop on Scholarly Document Processing (SDP 2020) Workshop Paper


Abstract: In this paper we focus on systems for Task 1 (Task 1A and Task 1B) of the CL-SciSumm Shared Task 2020. Task 1A is treated as a binary classification task over sentence pairs. We propose strategies based on pre-trained language models, using domain-specific embeddings and special tokens, and further explore fusing contextualized embeddings with extra information. We leverage SemBERT to capture structured semantic information, and we also combine BERT-based models with non-neural classifiers. For Task 1B, a language model is fine-tuned with per-class weights to perform multi-label classification. The results show that extra information improves the identification of cited text spans, that end-to-end trained models outperform models trained in two stages, and that averaging the predictions of multiple models is more accurate than any individual model.
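The class-weighted fine-tuning described for Task 1B can be illustrated with the loss it implies. The sketch below is a hypothetical, minimal weighted binary cross-entropy for multi-label classification in NumPy; the paper fine-tunes a full language model, and the function name and weighting scheme here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_bce_loss(logits, targets, class_weights):
    """Per-class weighted binary cross-entropy for multi-label
    classification (illustrative sketch, not the paper's exact loss).

    logits:        (batch, n_labels) raw model scores
    targets:       (batch, n_labels) 0/1 label indicators
    class_weights: (n_labels,) larger weights up-weight rare classes
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per label
    eps = 1e-12                            # guard against log(0)
    per_label = -(targets * np.log(probs + eps)
                  + (1.0 - targets) * np.log(1.0 - probs + eps))
    # each label's loss term is scaled by its class weight, so errors on
    # under-represented facet classes contribute more to the gradient
    return float(np.mean(per_label * class_weights))

# example: two labels, uniform vs. doubled class weights
logits = np.array([[0.0, 0.0]])
targets = np.array([[1.0, 0.0]])
uniform = weighted_bce_loss(logits, targets, np.array([1.0, 1.0]))
doubled = weighted_bce_loss(logits, targets, np.array([2.0, 2.0]))
```

With zero logits every sigmoid is 0.5, so the uniform-weight loss equals -log(0.5) and doubling all class weights doubles the loss; in practice the weights would be set inversely to class frequency.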