Debiasing knowledge graph embeddings
Joseph Fisher, Arpit Mittal, Dave Palfrey, Christos Christodoulopoulos
Machine Learning for NLP (Long Paper)
Abstract:
It has been shown that knowledge graph embeddings encode potentially harmful social biases, such as the information that women are more likely to be nurses, and men more likely to be bankers. As graph embeddings begin to be used more widely in NLP pipelines, there is a need to develop training methods which remove such biases. Previous approaches to this problem both significantly increase training time (by a factor of eight or more) and substantially decrease model accuracy. We present a novel approach in which all embeddings are trained, using an adversarial loss, to be neutral to sensitive attributes such as gender by default; sensitive attributes are then added back in whitelisted cases. Training time increases only marginally over a baseline model, and the debiased embeddings perform almost as accurately on the triple-prediction task as their non-debiased counterparts.
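The abstract does not give implementation details, so the following is only a minimal sketch, assuming a PyTorch setup and a DistMult-style scorer, of how an adversarial loss can push entity embeddings to be uninformative about a sensitive attribute. All names (DebiasedKGE, GradReversal, adv_weight, attr) and hyperparameters are illustrative assumptions rather than the authors' code, and the whitelisting step that re-attaches sensitive attributes for permitted entities is not shown:

# Minimal sketch of adversarial debiasing for KG embeddings (illustrative, not the paper's code).
# A discriminator tries to recover a sensitive attribute (e.g. gender) from an entity embedding;
# the embedding receives the negated discriminator gradient, so it learns to hide that attribute
# while still being trained on a standard triple-scoring loss.
import torch
import torch.nn as nn

class GradReversal(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DebiasedKGE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=200, adv_weight=1.0):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.adv_weight = adv_weight
        # Discriminator that tries to predict the sensitive attribute from the embedding.
        self.discriminator = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2)
        )

    def score(self, h, r, t):
        # DistMult-style triple score; the paper's base embedding model may differ.
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1)

    def losses(self, h, r, t, label, attr):
        # Standard link-prediction loss on (head, relation, tail) triples.
        triple_loss = nn.functional.binary_cross_entropy_with_logits(
            self.score(h, r, t), label.float()
        )
        # Adversarial loss: the discriminator learns to predict the attribute,
        # while the reversed gradient pushes entity embeddings to conceal it.
        reversed_emb = GradReversal.apply(self.ent(h), self.adv_weight)
        adv_loss = nn.functional.cross_entropy(self.discriminator(reversed_emb), attr)
        return triple_loss + adv_loss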