Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?
Zhengzhong Liang, Mihai Surdeanu
Workshop on Insights from Negative Results in NLP
Abstract:
Large pretrained language models (LMs) have been used successfully for multi-hop question answering. However, most of these approaches are not interpretable, as they do not explicitly generate the inference hops needed to explain a candidate answer. In this work, we investigate the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements. Our analysis shows that such LMs can generate new statements for some simple inference types, but performance remains poor for complex, real-world inference types such as those that require monotonicity, composition, and commonsense knowledge.
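To make the task format concrete, below is a minimal sketch of inference-hop generation with a pretrained generative LM: given premise statements, the model is prompted to produce a new, combined statement. This is an illustration only; the model choice (GPT-2 via Hugging Face transformers), the prompt format, and the example premises are assumptions, not the authors' actual experimental setup.

```python
# Hypothetical sketch: prompting a causal LM to generate an inference hop
# from two premise statements. Not the paper's actual configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any off-the-shelf causal LM for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical premises; the target is a composed statement that links them.
premises = [
    "A magnet attracts objects made of iron.",
    "Nails are made of iron.",
]
prompt = " ".join(premises) + " Therefore,"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # greedy decoding keeps the sketch deterministic
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the candidate inference hop).
new_statement = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(new_statement)
```

A composition example like this is exactly the kind of inference type the abstract reports as difficult: an untuned LM often continues the prompt fluently without producing the combined statement (e.g., "a magnet attracts nails").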