Guiding Attention for Self-Supervised Learning with Transformers

Ameet Deshpande, Karthik Narasimhan

SustaiNLP: Workshop on Simple and Efficient Natural Language Processing (Workshop Paper)


Abstract: In this paper, we propose a simple and effective technique to allow for efficient self-supervised learning with bi-directional Transformers. Our approach is motivated by recent studies demonstrating that self-attention patterns in trained models contain a majority of non-linguistic regularities. We propose a computationally efficient auxiliary loss function to guide attention heads to conform to such patterns. Our method is agnostic to the actual pre-training objective and results in faster convergence of models as well as better performance on downstream tasks compared to baselines, achieving state-of-the-art results in low-resource settings. Surprisingly, we also find that linguistic properties of attention heads are not necessarily correlated with language modeling performance.
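The abstract does not specify the exact form of the auxiliary loss or the target attention patterns; those details are in the full paper. As a rough, hypothetical sketch of the general idea only (the KL-divergence formulation, the next-token pattern, the function names, and the 0.1 weighting coefficient are all assumptions, not the authors' specification), an attention-guidance term could look like this:

```python
import torch
import torch.nn.functional as F

def attention_guidance_loss(attn_probs, target_pattern):
    """Hypothetical auxiliary loss: penalize deviation of attention
    distributions from a fixed target pattern via KL divergence.

    attn_probs:     (batch, heads, seq_len, seq_len) softmaxed attention weights
    target_pattern: (seq_len, seq_len) fixed pattern whose rows sum to 1
    """
    # Broadcast the fixed pattern over the batch and head dimensions.
    target = target_pattern.unsqueeze(0).unsqueeze(0).expand_as(attn_probs)
    # KL(target || attn): pushes each attention row toward the pattern.
    return F.kl_div(attn_probs.clamp_min(1e-9).log(), target, reduction="batchmean")

def next_token_pattern(seq_len):
    """One illustrative pattern: every position attends to the next token."""
    pattern = torch.zeros(seq_len, seq_len)
    pattern[torch.arange(seq_len - 1), torch.arange(1, seq_len)] = 1.0
    pattern[-1, -1] = 1.0  # last position attends to itself
    return pattern

# Illustrative usage (lambda = 0.1 is an assumed hyperparameter):
# attn = model_outputs.attentions[layer]   # (batch, heads, seq, seq)
# loss = mlm_loss + 0.1 * attention_guidance_loss(attn, next_token_pattern(attn.size(-1)))
```

Because the guidance term only compares already-computed attention weights against a fixed pattern, it adds little overhead and can be combined with any pre-training objective, which is consistent with the abstract's claim that the method is objective-agnostic.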