Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders
Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, Mohit Iyyer, Andrew McCallum
Track: Syntax: Tagging, Chunking, and Parsing (Long Paper)
Abstract:
The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*. In this paper, we discover that while DIORA exhaustively encodes all possible binary trees of a sentence with a soft dynamic program, its vector averaging approach is locally greedy and cannot recover from errors when computing the highest-scoring parse tree in bottom-up chart parsing. To fix this issue, we introduce S-DIORA, an improved variant of DIORA that encodes a single tree rather than a softly-weighted mixture of trees by employing a hard argmax operation and a beam at each cell in the chart. Our experiments show that through *fine-tuning* a pre-trained DIORA with our new algorithm, we improve the state of the art in *unsupervised* constituency parsing on the English WSJ Penn Treebank by 2.2–6% F1, depending on the data used for fine-tuning.
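To make the core idea concrete, the sketch below shows one way a per-cell hard argmax with a beam could replace soft vector averaging in a bottom-up chart. This is a minimal sketch in plain NumPy, not the authors' released implementation; `compose` and `score` are hypothetical stand-ins for DIORA's learned composition and scoring networks.

```python
# Minimal sketch (NumPy only, not the authors' code) of per-cell hard
# selection with a beam, as opposed to DIORA's soft vector averaging.
# `compose` and `score` are hypothetical stand-ins for learned networks.
import numpy as np

def compose(left_vec, right_vec):
    # Hypothetical composition; the real model uses a learned network.
    return np.tanh(left_vec + right_vec)

def score(vec):
    # Hypothetical compatibility score for a composed span vector.
    return float(vec.sum())

def inside_pass(leaves, beam_size=2):
    """Bottom-up chart over all spans. Each cell keeps a beam of
    (score, vector) candidates chosen from the split points, rather
    than a softly weighted average over all splits."""
    n = len(leaves)
    chart = {(i, i + 1): [(0.0, vec)] for i, vec in enumerate(leaves)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            candidates = []
            for k in range(i + 1, j):  # every split point of span (i, j)
                for ls, lv in chart[(i, k)]:
                    for rs, rv in chart[(k, j)]:
                        v = compose(lv, rv)
                        candidates.append((ls + rs + score(v), v))
            # Hard selection: keep only the top-scoring candidates.
            candidates.sort(key=lambda c: c[0], reverse=True)
            chart[(i, j)] = candidates[:beam_size]
    return chart

# Usage: encode a toy 4-token "sentence" of random leaf vectors and
# read off the best candidate for the full-sentence span.
chart = inside_pass([np.random.randn(8) for _ in range(4)])
best_score, best_vec = chart[(0, 4)][0]
```

With `beam_size=1` this reduces to a pure per-cell argmax, so each cell encodes exactly one tree; a larger beam lets a cell retain a few alternative analyses to recover from locally greedy mistakes.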
Similar Papers
Please Mind the Root: Decoding Arborescences for Dependency Parsing
Ran Zmigrod, Tim Vieira, Ryan Cotterell

Training for Gibbs Sampling on Conditional Random Fields with Neural Scoring Factors
Sida Gao, Matthew R. Gormley

On the Role of Supervision in Unsupervised Constituency Parsing
Haoyue Shi, Karen Livescu, Kevin Gimpel

Improving AMR Parsing with Sequence-to-Sequence Pre-training
Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
