Improving QA Generalization by Concurrent Modeling of Multiple Biases
Mingzhu Wu, Nafise Sadat Moosavi, Andreas Rücklé, Iryna Gurevych
SustaiNLP: Workshop on Simple and Efficient Natural Language Processing (Workshop Paper)
Abstract:
Existing NLP datasets contain various biases that models can easily exploit to achieve high performance on the corresponding evaluation sets. However, models that focus on dataset-specific biases fail to learn more generalizable knowledge about the task from broader data patterns. In this paper, we investigate the impact of debiasing methods on generalization and propose a general framework for improving performance on both in-domain and out-of-domain datasets by concurrently modeling multiple biases in the training data. Our framework weights each example based on the biases it contains and the strength of those biases in the training data. It then uses these weights in the training objective so that the model relies less on examples with high bias weights. We extensively evaluate our framework on extractive question answering with training data from various domains that contain multiple biases of different strengths. We perform the evaluations in two different settings, in which the model is trained on a single domain or on multiple domains simultaneously, and show its effectiveness in both settings compared to state-of-the-art debiasing methods.
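The abstract describes a loss in which each training example is down-weighted according to how strongly biases apply to it. Below is a minimal PyTorch sketch of that idea. The function names, the linear combination of bias confidences by bias strength, and the `1 - score` weighting are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def multi_bias_weights(bias_probs, bias_strengths):
    """Combine per-bias confidences into one weight per example.

    bias_probs:     (batch, n_biases) probability that each bias-only model
                    assigns to the gold answer of each example.
    bias_strengths: (n_biases,) how prevalent each bias is in the training
                    data (assumed here to linearly scale each bias's vote).
    """
    # Evidence that an example is solvable by biases alone.
    bias_score = (bias_probs * bias_strengths).sum(dim=1) / bias_strengths.sum()
    # Strongly biased examples receive small weights.
    return 1.0 - bias_score


def debiased_loss(logits, labels, weights):
    """Cross-entropy in which each example's loss is scaled by its weight."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_example).sum() / weights.sum().clamp(min=1e-8)


# Toy usage: 4 examples, 3 answer classes, 2 bias models.
logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
bias_probs = torch.rand(4, 2)
bias_strengths = torch.tensor([0.7, 0.3])

weights = multi_bias_weights(bias_probs, bias_strengths)
loss = debiased_loss(logits, labels, weights)
```

In this sketch the main model still sees every example, but examples that bias-only models answer confidently contribute less to the gradient, which is the mechanism the abstract attributes to the framework.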