Revealing the Myth of Higher-Order Inference in Coreference Resolution

Liyan Xu, Jinho D. Choi

Discourse and Pragmatics Short Paper

Gather-5H: Nov 18 (18:00-20:00 UTC)


Abstract: This paper analyzes the impact of higher-order inference (HOI) on the task of coreference resolution. HOI has been adopted by almost all recent coreference resolution models with little investigation of its true effectiveness beyond representation learning. To make a comprehensive analysis, we implement an end-to-end coreference system as well as four HOI approaches: attended antecedents, entity equalization, span clustering, and cluster merging, where the latter two are our original methods. We find that given a high-performing encoder such as SpanBERT, the impact of HOI is negative to marginal, providing a new perspective on HOI for this task. Our best model, using cluster merging, achieves an Avg-F1 of 80.2 on the English portion of the CoNLL 2012 shared task dataset.
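As a rough illustration of the kind of HOI step the abstract refers to, the sketch below shows one round of attended-antecedent refinement in the style of span-refinement coreference models: each span embedding is updated with a gated mix of itself and the attention-weighted average of its candidate antecedents. This is a hypothetical NumPy sketch, not the authors' implementation; all names, shapes, and the gating parameterization are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attended_antecedent_refinement(span_emb, scores, W_gate):
    """One HOI round: refine each span embedding with the
    attention-weighted average of its antecedent embeddings.
    span_emb: (n, d) span representations
    scores:   (n, n) pairwise antecedent scores
    W_gate:   (2d, d) gate projection (illustrative parameter)
    """
    n = span_emb.shape[0]
    # Antecedents must not follow the span: mask out j > i
    # (the span itself, j == i, stands in for a dummy antecedent).
    mask = np.triu(np.ones((n, n)), k=1).astype(bool)
    masked_scores = np.where(mask, -1e9, scores)
    probs = softmax(masked_scores)      # antecedent distribution per span
    attended = probs @ span_emb         # expected antecedent embedding
    gate = sigmoid(np.concatenate([span_emb, attended], axis=-1) @ W_gate)
    # Element-wise gated interpolation between old and attended embedding.
    return gate * span_emb + (1 - gate) * attended

# Toy usage with random inputs (illustrative only).
rng = np.random.default_rng(0)
n, d = 5, 8
span_emb = rng.normal(size=(n, d))
scores = rng.normal(size=(n, n))
W_gate = rng.normal(size=(2 * d, d)) * 0.1
refined = attended_antecedent_refinement(span_emb, scores, W_gate)
```

In a full system this refinement would be run for a fixed number of iterations on encoder-produced span embeddings before the final antecedent scoring; the paper's finding is that with a strong encoder such as SpanBERT, such iterations add little or even hurt.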


Similar Papers

Incremental Neural Coreference Resolution in Constant Memory
Patrick Xia, João Sedoc, Benjamin Van Durme
Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks
Shubham Toshniwal, Sam Wiseman, Allyson Ettinger, Karen Livescu, Kevin Gimpel
Does the Objective Matter? Comparing Training Objectives for Pronoun Resolution
Yordan Yordanov, Oana-Maria Camburu, Vid Kocijan, Thomas Lukasiewicz