Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference

Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, Kevin Gimpel

Semantics: Sentence-level Semantics, Textual Inference, and Other Areas (Long Paper)

Gather-5B: Nov 18 (18:00-20:00 UTC)


Abstract: While discriminative neural network classifiers are generally preferred, recent work has shown advantages of generative classifiers in terms of data efficiency and robustness. In this paper, we focus on natural language inference (NLI). We propose GenNLI, a generative classifier for NLI tasks, and empirically characterize its performance by comparing it to five baselines, including discriminative models and large-scale pretrained language representation models like BERT. We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work (Lewis and Fan, 2019). In particular, we find strong results with a simple unbounded modification to log loss, which we call the "infinilog loss". Our experiments show that GenNLI outperforms both discriminative and pretrained baselines across several challenging NLI experimental settings, including small training sets, imbalanced label distributions, and label noise.
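To make the contrast with discriminative classifiers concrete, here is a minimal sketch of the generic generative-classification decision rule the abstract refers to: pick the label whose class-conditional model assigns the input the highest likelihood, argmax over y of p(x | y) p(y). This toy version uses per-label unigram models with add-alpha smoothing; it is an illustration of the rule only, not GenNLI itself, which the paper builds from a sequence-to-sequence model conditioned on the premise and label.

```python
import math
from collections import Counter

# Generative classification: choose argmax_y p(tokens | y) * p(y).
# Class-conditional model here is a toy smoothed unigram model per label;
# GenNLI instead models the hypothesis with a seq2seq model (see the paper).

def train_unigram_models(examples):
    """examples: list of (tokens, label) pairs.
    Returns per-label token counts and label frequencies (for the prior)."""
    counts, label_counts = {}, Counter()
    for tokens, label in examples:
        label_counts[label] += 1
        counts.setdefault(label, Counter()).update(tokens)
    return counts, label_counts

def classify(tokens, counts, label_counts, alpha=1.0):
    """Score each label by log p(y) + sum_t log p(t | y), return the best."""
    vocab = {t for c in counts.values() for t in c}
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label, c in counts.items():
        denom = sum(c.values()) + alpha * len(vocab)
        score = math.log(label_counts[label] / total)  # log prior p(y)
        for t in tokens:  # add-alpha smoothed log-likelihood log p(t | y)
            score += math.log((c[t] + alpha) / denom)
        if score > best_score:
            best, best_score = label, score
    return best
```

A discriminatively fine-tuned generative classifier keeps this decision rule but trains the class-conditional models with an objective that compares the correct label's score against the incorrect labels', rather than with pure likelihood.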
