Early Exiting BERT for Efficient Document Ranking

Ji Xin, Rodrigo Nogueira, Yaoliang Yu, Jimmy Lin

SustaiNLP: Workshop on Simple and Efficient Natural Language Processing (Workshop Paper)


Abstract: Pre-trained language models such as BERT have shown their effectiveness in various tasks. Despite their power, they are known to be computationally intensive, which hinders their deployment in real-world applications. In this paper, we introduce early exiting BERT for document ranking. With a slight modification, BERT becomes a model with multiple output paths, and each inference sample can exit early from one of these paths. In this way, computation can be effectively allocated among samples, and overall system latency is significantly reduced while the original ranking quality is maintained. Our experiments on two document ranking datasets demonstrate up to 2.5x inference speedup with minimal quality degradation. The source code of our implementation can be found at https://github.com/castorini/earlyexiting-monobert.
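
The mechanism described in the abstract can be illustrated with a short sketch: a relevance classifier is attached after each transformer layer, and inference stops at the first layer whose prediction is sufficiently confident. The code below is a minimal illustration under assumptions, not the released implementation; the layer count, the confidence threshold, and names such as EarlyExitRanker and infer are made up for this example (see the repository linked above for the actual code).

import torch
import torch.nn as nn

class EarlyExitRanker(nn.Module):
    """Toy BERT-like encoder with an exit classifier after every layer (illustrative only)."""

    def __init__(self, num_layers=12, hidden=768, heads=12):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            for _ in range(num_layers)
        )
        # One relevance classifier per layer; any of them can end inference.
        self.exits = nn.ModuleList(nn.Linear(hidden, 2) for _ in range(num_layers))

    @torch.no_grad()
    def infer(self, embeddings, confidence=0.9):
        """Score one query-document pair, exiting at the first confident layer.

        embeddings: (1, seq_len, hidden) already-embedded [CLS] + query + document tokens.
        Returns (probability of relevance, index of the exit layer used).
        """
        hidden_states = embeddings
        last = len(self.layers) - 1
        for i, (layer, exit_head) in enumerate(zip(self.layers, self.exits)):
            hidden_states = layer(hidden_states)
            # Classify from the [CLS] position and check prediction confidence.
            probs = exit_head(hidden_states[:, 0]).softmax(dim=-1)
            if probs.max().item() >= confidence or i == last:
                return probs[0, 1].item(), i


model = EarlyExitRanker().eval()
pair = torch.randn(1, 128, 768)  # placeholder embeddings for one query-document pair
score, exit_layer = model.infer(pair)
print(f"relevance={score:.3f}, exited after layer {exit_layer}")

In a setup like this, easy query-document pairs leave the network after only a few layers while harder pairs use the full stack, so the average per-document cost, and hence overall ranking latency, drops without training a separate smaller model.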