Mitigating Gender Bias for Neural Dialogue Generation with Adversarial Learning

Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zitao Liu, Jiliang Tang

Dialog and Interactive Systems Long Paper

Zoom-4D: Nov 17 (01:00-02:00 UTC)


Abstract: Dialogue systems play an increasingly important role in various aspects of our daily life. Recent research shows that dialogue systems trained on human conversation data are biased; in particular, they can produce responses that reflect people's gender prejudice. Many debiasing methods have been developed for other NLP tasks, such as word embeddings. However, they are not directly applicable to dialogue systems because they are likely to force dialogue models to generate similar responses for different genders, which greatly degrades the diversity of the generated responses and severely hurts the performance of the dialogue models. In this paper, we propose Debiased-Chat, a novel adversarial learning framework that trains dialogue models free from gender bias while preserving their performance. Extensive experiments on two real-world conversation datasets show that our framework significantly reduces gender bias in dialogue models while maintaining response quality.
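For readers unfamiliar with adversarial debiasing, the sketch below illustrates the general idea the abstract refers to: a gender discriminator reads a representation of the generated response, and the dialogue model is trained to fool it via gradient reversal. This is a minimal, generic illustration, not the authors' Debiased-Chat implementation; all names, sizes, and the mean-pooled "response representation" are assumptions made for the example.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GenderDiscriminator(nn.Module):
    """Predicts a binary gender label from a response representation."""

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 128), nn.ReLU(), nn.Linear(128, 2)
        )

    def forward(self, response_repr):
        return self.net(response_repr)


class DebiasedDialogueModel(nn.Module):
    """Toy encoder-decoder whose decoder states are also fed, through
    gradient reversal, to the gender discriminator (illustrative only)."""

    def __init__(self, vocab_size: int = 1000, hidden_size: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)
        self.discriminator = GenderDiscriminator(hidden_size)

    def forward(self, src, tgt, lambd: float = 1.0):
        _, h = self.encoder(self.embed(src))           # encode the context
        dec_out, _ = self.decoder(self.embed(tgt), h)  # teacher-forced decode
        logits = self.out(dec_out)                     # next-token logits
        # Mean-pooled decoder states serve as the "response representation".
        response_repr = dec_out.mean(dim=1)
        # The discriminator learns to predict gender; through gradient
        # reversal, the generator learns to remove gender information.
        gender_logits = self.discriminator(GradReverse.apply(response_repr, lambd))
        return logits, gender_logits


# Example joint objective: generation loss plus adversarial gender loss.
model = DebiasedDialogueModel()
src = torch.randint(0, 1000, (4, 10))   # dummy context token ids
tgt = torch.randint(0, 1000, (4, 12))   # dummy response token ids
gender = torch.randint(0, 2, (4,))      # dummy gender labels
logits, gender_logits = model(src, tgt)
gen_loss = nn.functional.cross_entropy(logits.view(-1, 1000), tgt.view(-1))
adv_loss = nn.functional.cross_entropy(gender_logits, gender)
(gen_loss + adv_loss).backward()
```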


Similar Papers

Cross Copy Network for Dialogue Generation
Changzhen Ji, Xin Zhou, Yating Zhang, Xiaozhong Liu, Changlong Sun, Conghui Zhu, Tiejun Zhao
Generating Dialogue Responses from a Semantic Latent Space
Wei-Jen Ko, Avik Ray, Yilin Shen, Hongxia Jin
Regularizing Dialogue Generation by Imitating Implicit Scenarios
Shaoxiong Feng, Xuancheng Ren, Hongshen Chen, Bin Sun, Kan Li, Xu Sun
Structured Attention for Unsupervised Dialogue Structure Induction
Liang Qiu, Yizhou Zhao, Weiyan Shi, Yuan Liang, Feng Shi, Tao Yuan, Zhou Yu, Song-Chun Zhu