How Far Can We Go with Data Selection? A Case Study on Semantic Sequence Tagging Tasks

Samuel Louvan, Bernardo Magnini

Workshop on Insights from Negative Results in NLP (Workshop Paper)


Abstract: Although several works have addressed the role of data selection in improving transfer learning for various NLP tasks, there is no consensus about its real benefits and, more generally, there is a lack of shared practices on how it can best be applied. We propose a systematic approach aimed at evaluating data selection in scenarios of increasing complexity. Specifically, we compare the case in which source and target tasks are the same while source and target domains are different, against the more challenging scenario where both tasks and domains are different. We run a number of experiments on semantic sequence tagging tasks, which are relatively less investigated in data selection, and conclude that data selection has more benefit in the scenario where the tasks are the same, while in the case of different (although related) tasks from distant domains, a combination of data selection and multi-task learning is ineffective in most cases.
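To make the notion of "data selection" concrete, the sketch below shows one common similarity-based heuristic: rank candidate source-domain sentences by their similarity to the target domain and keep only the most similar ones for training. This is an illustrative example of the general technique, not the specific selection method evaluated in the paper; the function names and the bag-of-words similarity measure are assumptions made for the sketch.

```python
# Minimal sketch of similarity-based data selection (illustrative only):
# rank source-domain sentences by cosine similarity of their bag-of-words
# vectors to a target-domain centroid, and keep the top-k most similar.

from collections import Counter
import math


def bow_vector(tokens):
    """Bag-of-words term-frequency vector for one tokenized sentence."""
    return Counter(tokens)


def cosine(u, v):
    """Cosine similarity between two sparse Counter vectors."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


def select_source_data(source_sentences, target_sentences, k):
    """Return the k source sentences most similar to the target domain."""
    # Target-domain centroid: pooled token counts over all target sentences.
    target_centroid = Counter()
    for sent in target_sentences:
        target_centroid.update(sent)

    scored = [(cosine(bow_vector(sent), target_centroid), sent)
              for sent in source_sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in scored[:k]]


# Example usage with toy tokenized sentences:
source = [["book", "a", "flight", "to", "boston"],
          ["play", "some", "jazz", "music"],
          ["find", "flights", "from", "denver"]]
target = [["show", "me", "flights", "to", "denver"],
          ["book", "a", "flight", "for", "tomorrow"]]
print(select_source_data(source, target, k=2))
```

The selected source examples would then be added to (or used in place of) the in-domain training data, either for the same tagging task or, in the harder setting studied here, combined with multi-task learning across related tasks.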