DeepPaperComposer: A Simple Solution for Training Data Preparation for Parsing Research Papers
Meng Ling, Jian Chen
Workshop paper at the First Workshop on Scholarly Document Processing (SDP 2020)
Abstract:
We present DeepPaperComposer, a simple solution for preparing highly accurate (100%) training data, without manual labeling, for extracting content from scholarly articles using convolutional neural networks (CNNs). We used our approach to generate data and trained CNNs to extract eight categories of both textual (titles, abstracts, headers, figure and table captions, and other texts) and non-textual (figures and tables) content from 30 years of IEEE VIS conference papers, of which a third were scanned bitmap PDFs. We curated this dataset and named it VISpaper-3K. We then showed initial benchmark performance of YOLOv3 and Faster-RCNN trained on VISpaper-3K and evaluated on both VISpaper-3K and CS-150. We open-source DeepPaperComposer, our training data generation pipeline, and release the resulting annotation data VISpaper-3K to promote reproducible research.
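
To make the benchmark setup concrete, here is a minimal sketch of fine-tuning a Faster-RCNN detector for the eight content categories using torchvision. The synthetic page image and annotations below are placeholders standing in for VISpaper-3K, whose actual format is not shown here; this is an illustrative assumption, not the authors' released code.

    # Minimal sketch (assumes torchvision >= 0.13; the synthetic sample
    # below stands in for real VISpaper-3K pages).
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_CLASSES = 8 + 1  # eight content categories plus background

    # Load a COCO-pretrained detector and swap in a box predictor
    # sized for the new label set.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    # One synthetic "page" with a single labeled region (e.g. a title box,
    # boxes in x1, y1, x2, y2 pixel coordinates).
    images = [torch.rand(3, 800, 600)]
    targets = [{"boxes": torch.tensor([[50.0, 40.0, 550.0, 90.0]]),
                "labels": torch.tensor([1])}]

    # One training step: in train mode the model returns a dict of losses.
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    loss = sum(model(images, targets).values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Training YOLOv3 would follow the same pattern through one of its reference implementations, with the annotations converted to that framework's box format.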