BioMegatron: Larger Biomedical Domain Language Model

Hoo-Chang Shin, Yang Zhang, Evelina Bakhturina, Raul Puri, Mostofa Patwary, Mohammad Shoeybi, Raghav Mani

NLP Applications Short Paper

Gather-3B: Nov 17 (18:00-20:00 UTC)


Abstract: There has been an influx of biomedical domain-specific language models, showing that language models pre-trained on biomedical text perform better on biomedical benchmarks than those trained on general-domain text corpora such as Wikipedia and Books. Yet, most works do not study in depth the factors that affect each domain language application. In addition, the effect of model size on domain-specific models has been largely unstudied. We empirically study and evaluate several factors that can affect performance on domain language applications, such as the sub-word vocabulary set, model size, pre-training corpus, and domain transfer. We show consistent improvements on benchmarks with our larger BioMegatron model trained on a larger domain corpus, contributing to our understanding of domain language model applications. We demonstrate noticeable improvements over the previous state-of-the-art (SOTA) on standard biomedical NLP benchmarks of question answering, named entity recognition, and relation extraction. Code and checkpoints to reproduce our experiments are available at github.com/NVIDIA/NeMo.
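
The abstract names the sub-word vocabulary set as one of the factors studied. As a quick illustration (not taken from the paper), the sketch below uses the Hugging Face transformers library to show how a general-domain WordPiece vocabulary fragments biomedical terms into many pieces; bert-base-uncased stands in for the general-domain tokenizer, and a biomedical vocabulary such as the one used by BioMegatron (an assumption here, available via NeMo) could be swapped in for comparison.

```python
# Hedged sketch (not from the paper): illustrate the "sub-word vocabulary"
# factor discussed in the abstract by tokenizing biomedical terms with a
# general-domain WordPiece vocabulary.
from transformers import AutoTokenizer

# General-domain tokenizer; a biomedical vocabulary (e.g., the one used by
# BioMegatron) would typically split these terms into far fewer pieces.
general_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

for term in ["thrombocytopenia", "acetaminophen", "lymphoblastic leukemia"]:
    pieces = general_tok.tokenize(term)
    print(f"{term!r} -> {len(pieces)} pieces: {pieces}")
```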


Similar Papers

Improving Low Compute Language Modeling with In-Domain Embedding Initialisation
Charles Welch, Rada Mihalcea, Jonathan K. Kummerfeld
COMETA: A Corpus for Medical Entity Linking in the Social Media
Marco Basaldella, Fangyu Liu, Ehsan Shareghi, Nigel Collier
Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining
Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text
Dongfang Li, Baotian Hu, Qingcai Chen, Weihua Peng, Anqi Wang