Counterfactual Augmentation for Training Next Response Selection
Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang
SustaiNLP: Workshop on Simple and Efficient Natural Language Processing (Workshop Paper)
Abstract:
This paper studies label augmentation for training dialogue response selection. Existing models are trained with "observational" annotation, where a single observed response is annotated as gold. In this paper, we propose "counterfactual augmentation" of pseudo-positive labels. We validate that the effectiveness of the augmented labels is comparable to that of observed positives, such that our model outperforms state-of-the-art models trained without augmentation.