Combining Self-Training and Self-Supervised Learning for Unsupervised Disfluency Detection
Shaolei Wang, Zhongyuan Wang, Wanxiang Che, Ting Liu
Speech and Multimodality Long Paper
Abstract:
Most existing approaches to disfluency detection rely heavily on human-annotated corpora, which are expensive to obtain in practice. Several proposals attempt to alleviate this issue with, for instance, self-supervised learning techniques, but they still require human-annotated corpora. In this work, we explore the unsupervised learning paradigm, which can potentially work with unlabeled text corpora that are cheaper and easier to obtain. Our model builds upon recent work on Noisy Student Training, a semi-supervised learning approach that extends the idea of self-training. Experimental results on the commonly used English Switchboard test set show that our approach achieves competitive performance compared to previous state-of-the-art supervised systems that use contextualized word embeddings (e.g., BERT and ELECTRA).
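The self-training loop the abstract alludes to can be sketched as follows. This is a toy illustration only, not the paper's system: the rule-based seed teacher, the `<unk>` input noise, and the counting "student" are hypothetical stand-ins for the neural models, data augmentation, and training procedure used in Noisy Student Training.

```python
import random

def add_noise(tokens, p=0.2, rng=None):
    """Input noise for the student: randomly mask tokens with <unk>
    (a toy stand-in for the augmentation/dropout in Noisy Student Training)."""
    rng = rng or random.Random(0)
    return [t if rng.random() > p else "<unk>" for t in tokens]

def train_student(pseudo_labeled):
    """'Train' a student: remember which tokens were usually tagged disfluent."""
    counts = {}
    for tokens, tags in pseudo_labeled:
        for tok, tag in zip(tokens, tags):
            pos, tot = counts.get(tok, (0, 0))
            counts[tok] = (pos + tag, tot + 1)
    disfluent = {t for t, (pos, tot) in counts.items() if 2 * pos > tot}
    return lambda tok: int(tok in disfluent)

def noisy_student(teacher, unlabeled, rounds=2, rng=None):
    """One Noisy Student cycle: the teacher pseudo-labels unlabeled text,
    a student is trained on noised inputs, and the student becomes the
    next round's teacher."""
    for _ in range(rounds):
        pseudo = [(add_noise(s, rng=rng), [teacher(t) for t in s])
                  for s in unlabeled]
        teacher = train_student(pseudo)
    return teacher

# Hypothetical seed teacher: a crude filled-pause rule standing in for a
# model initialized with self-supervised objectives.
seed = lambda tok: int(tok in {"uh", "um"})
corpus = [["i", "uh", "i", "want", "um", "a", "flight"],
          ["uh", "book", "a", "um", "ticket"]]
student = noisy_student(seed, corpus)
print(student("uh"), student("flight"))
```

In the paper's setting, the teacher and student would be full sequence-labeling models and the unlabeled corpus would be large-scale raw text; the loop structure (pseudo-label, inject noise, retrain, promote the student) is the part this sketch illustrates.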