Learning from Unlabelled Data for Clinical Semantic Textual Similarity
Yuxia Wang, Karin Verspoor, Timothy Baldwin
Workshop paper at the 3rd Clinical Natural Language Processing Workshop (Clinical NLP 2020)
Abstract:
Domain pretraining followed by task fine-tuning has become the standard paradigm for NLP tasks, but requires in-domain labelled data for task fine-tuning. To overcome this, we propose to utilise domain unlabelled data by assigning pseudo labels from a general model. We evaluate the approach on two clinical STS datasets, and achieve r = 0.80 on N2C2-STS. Further investigation reveals that if the data distribution of unlabelled sentence pairs is closer to the test data, we can obtain better performance. By leveraging a large general-purpose STS dataset and small-scale in-domain training data, we obtain further improvements to r = 0.90, a new SOTA.
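The pseudo-labelling idea can be illustrated with a minimal sketch, which is not the authors' exact pipeline: it assumes the sentence-transformers library, uses the public cross-encoder/stsb-roberta-base checkpoint as a stand-in for the "general model", and the clinical sentence pairs and small labelled set shown are hypothetical placeholders. A general STS model scores unlabelled in-domain pairs, and those scores are mixed with a small labelled in-domain set for fine-tuning.

```python
# Sketch only: pseudo-labelling unlabelled clinical pairs with a general STS model.
# Assumptions: sentence-transformers is installed; model names and data are illustrative.
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# 1) A general-purpose STS model (trained on STS-B-style data).
general_model = CrossEncoder("cross-encoder/stsb-roberta-base")

# 2) Unlabelled in-domain (clinical) sentence pairs -- placeholders here.
clinical_pairs = [
    ("Patient denies chest pain.", "No chest pain reported."),
    ("Continue metformin 500 mg daily.", "Metformin 500 mg PO daily."),
]

# 3) Assign pseudo labels: predicted similarity scores in [0, 1].
pseudo_scores = general_model.predict(clinical_pairs)
pseudo_train = [
    InputExample(texts=list(pair), label=float(score))
    for pair, score in zip(clinical_pairs, pseudo_scores)
]

# 4) Combine with a small labelled in-domain set and fine-tune a new model.
labelled_train = [
    InputExample(texts=["He reports mild nausea.", "Mild nausea is reported."], label=0.9),
]
train_loader = DataLoader(labelled_train + pseudo_train, shuffle=True, batch_size=16)
domain_model = CrossEncoder("bert-base-uncased", num_labels=1)
domain_model.fit(train_dataloader=train_loader, epochs=1, warmup_steps=10)
```

The abstract's observation that performance improves when the unlabelled pairs' distribution is closer to the test data would correspond, in this sketch, to how clinical_pairs are sampled.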