Information Extraction from Swedish Medical Prescriptions with Sig-Transformer Encoder
John Pougué Biyong, Bo Wang, Terry Lyons, Alejo Nevado-Holgado
Workshop paper at the 3rd Clinical Natural Language Processing Workshop (Clinical NLP 2020)
Abstract:
Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding, topped with a simple prediction layer, has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture that incorporates the signature transform into the self-attention model. This architecture is inserted between the embedding and prediction layers. Experiments on a new Swedish prescription dataset show that the proposed architecture outperforms baseline models on two of the three information extraction tasks. Finally, we compare two embedding approaches: applying Multilingual BERT directly to the Swedish text, and translating the text into English and encoding it with a BERT model pretrained on clinical notes.
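The abstract does not spell out the architecture, but the core idea it names is concrete: combine a self-attention layer with the path signature of the token-embedding sequence, between a BERT encoder and a prediction head. The sketch below is a minimal illustration of that idea, not the authors' implementation; the class name, the down-projection before the signature, the signature depth, the fusion by concatenation, and the use of the `signatory` library are all assumptions made here for demonstration.

```python
# Minimal sketch of a signature + self-attention block (assumed design).
# Requires: pip install torch signatory
import torch
import torch.nn as nn
import signatory


class SigAttentionEncoder(nn.Module):
    """Hypothetical encoder fusing self-attention with a path signature."""

    def __init__(self, embed_dim: int, sig_channels: int = 16,
                 sig_depth: int = 2, num_heads: int = 4):
        super().__init__()
        self.sig_depth = sig_depth
        # Project embeddings down first: the signature dimension grows
        # like sig_channels ** sig_depth, so keep the channel count small.
        self.reduce = nn.Linear(embed_dim, sig_channels)
        sig_dim = signatory.signature_channels(sig_channels, sig_depth)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(embed_dim + sig_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) token embeddings from BERT.
        attn_out, _ = self.attn(x, x, x)
        # Depth-`sig_depth` signature of the embedding sequence, viewed as
        # a path in R^sig_channels; one vector per sequence.
        sig = signatory.signature(self.reduce(x), self.sig_depth)
        sig = sig.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast over tokens
        # Fuse per-token attention features with the global path signature.
        return self.proj(torch.cat([attn_out, sig], dim=-1))


# Usage: stand-in for BERT embeddings, followed by a token-level head.
tokens = torch.randn(2, 16, 768)
encoder = SigAttentionEncoder(embed_dim=768)
features = encoder(tokens)  # (2, 16, 768), ready for a prediction layer
```

For the embedding comparison the abstract describes, the two inputs to such a block would come either from Multilingual BERT applied to the Swedish text directly, or from a clinical-domain English BERT applied to a machine translation of the text; the block itself is agnostic to which encoder produced the embeddings.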