What do we expect from Multiple-choice QA Systems?
Krunal Shah, Nitish Gupta, Dan Roth
Workshop Paper at the Workshop on Insights from Negative Results in NLP
Abstract:
The recent success of machine learning systems on various QA datasets could be interpreted as a significant improvement in models’ language understanding abilities. However, using various perturbations, multiple recent works have shown that good performance on a dataset does not necessarily indicate behavior that aligns with human expectations of models that “understand” language. In this work, we consider a top-performing model on several Multiple Choice Question Answering (MCQA) datasets and evaluate it against a set of expectations one might have of such a model, using a series of zero-information perturbations of the model’s inputs. Our results show that the model clearly falls short of these expectations, which motivates a modified training approach that forces the model to better attend to its inputs. We show that the new training paradigm leads to a model that performs on par with the original model while better satisfying our expectations.
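For concreteness, the following is a minimal, hypothetical Python sketch (not taken from the paper) of what one zero-information perturbation could look like for an MCQA instance. It assumes a generic predict(question, options) inference call as a stand-in for any MCQA model; the intuition is that a prediction which survives the removal of the question likely rests on option-only artifacts rather than language understanding.

    # Hypothetical sketch of a zero-information perturbation for MCQA.
    # `predict` is an assumed placeholder for a model's inference call,
    # not an API from the paper.

    def predict(question: str, options: list[str]) -> int:
        """Placeholder: return the index of the model's chosen option."""
        raise NotImplementedError

    def blank_question_perturbation(question: str, options: list[str]) -> dict:
        """Replace the question with an empty string so the model sees only
        the answer options. If the model still picks its original answer,
        it may be relying on artifacts in the options rather than the question."""
        original = predict(question, options)
        perturbed = predict("", options)
        return {
            "original_prediction": original,
            "perturbed_prediction": perturbed,
            "prediction_unchanged": original == perturbed,
        }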