Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering

Harsh Jhamtani, Peter Clark

Question Answering Long Paper

Zoom-1C: Nov 16 (16:00-17:00 UTC)

Abstract: Despite rapid progress in multihop question answering (QA), models still have trouble explaining why an answer is correct, and there is limited explanation training data to learn from. To address this, we introduce three explanation datasets in which explanations formed from corpus facts are annotated. Our first dataset, eQASC, contains over 98K explanation annotations for the multihop question-answering dataset QASC, and is the first to annotate multiple candidate explanations for each answer. The second dataset, eQASC-perturbed, is constructed by crowd-sourcing perturbations (while preserving their validity) of a subset of explanations in QASC, to test the consistency and generalization of explanation prediction models. The third dataset, eOBQA, is constructed by adding explanation annotations to the OBQA dataset, to test the generalization of models trained on eQASC. We show that this data can be used to significantly improve explanation quality (+14% absolute F1 over a strong retrieval baseline) using a BERT-based classifier, though performance still falls short of the upper bound, offering a new challenge for future research. We also explore a delexicalized chain representation in which repeated noun phrases are replaced by variables, turning them into generalized reasoning chains (for example: "X is a Y" AND "Y has Z" IMPLIES "X has Z"). We find that generalized chains maintain performance while also being more robust to certain perturbations. Code and datasets can be found at https://allenai.org/data/eqasc.
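
The delexicalized chain representation mentioned in the abstract lends itself to a small illustration. The Python sketch below is not the authors' released code; the function name, the example chain, and the phrase list are assumptions for illustration. It replaces noun phrases shared across the facts of a chain with variables (X, Y, Z), yielding a generalized reasoning chain of the kind shown in the abstract.

```python
import re

def delexicalize_chain(facts, shared_phrases):
    """Replace each shared noun phrase with a fresh variable (X, Y, Z, ...)."""
    variables = ["X", "Y", "Z", "W", "V"]
    mapping = {}
    generalized = []
    for fact in facts:
        out = fact
        for phrase in shared_phrases:
            if phrase not in mapping:
                mapping[phrase] = variables[len(mapping)]
            # Replace whole-word occurrences of the phrase with its variable.
            out = re.sub(r"\b" + re.escape(phrase) + r"\b", mapping[phrase], out)
        generalized.append(out)
    return generalized, mapping

if __name__ == "__main__":
    # Hypothetical explanation chain for "Do eagles have feathers?"
    chain = ["an eagle is a bird", "a bird has feathers"]
    phrases = ["eagle", "bird", "feathers"]
    gen, var_map = delexicalize_chain(chain, phrases)
    print(" AND ".join(f'"{f}"' for f in gen))
    # -> "an X is a Y" AND "a Y has Z"
```

In the paper's setting, such generalized chains let a classifier score the reasoning pattern rather than the specific entities, which is what the abstract credits for the added robustness to certain perturbations.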


Similar Papers

Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal
ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
Michael Boratko, Xiang Li, Tim O'Gorman, Rajarshi Das, Dan Le, Andrew McCallum