Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures
Anirudh Joshi, Namit Katariya, Xavier Amatriain, Anitha Kannan
Workshop paper, 3rd Clinical Natural Language Processing Workshop (Clinical NLP 2020)
Abstract:
Understanding a medical conversation between a patient and a physician poses a unique natural language understanding challenge, since it combines elements of standard open-ended conversation with highly domain-specific elements that require expertise and medical knowledge. Summarization of medical conversations is a particularly important aspect of medical conversation understanding, since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow-ups. In this paper, we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient’s medical history. Our approach is a variation of the pointer-generator network in which we introduce a penalty on the generator distribution and explicitly model negations. The model also captures important properties of medical conversations, such as medical knowledge drawn from standardized medical ontologies, better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred over the baseline pointer-generator model on twice as many summaries, and that it captures most or all of the information in 80% of conversations, making it a realistic alternative to costly manual summarization by medical experts.
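The central architectural change described above is a penalty on the generator distribution of a pointer-generator network, biasing the decoder toward copying tokens from the source dialogue rather than generating them from the vocabulary. The snippet below is a minimal sketch of one way such a penalized mixture could be computed; the multiplicative penalty `gen_penalty`, the function name, and the toy numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a penalized pointer-generator mixture.
# NOT the paper's exact formulation: scaling p_gen by a constant
# gen_penalty < 1 (to bias the model toward copying) is an assumption.
import numpy as np

def final_distribution(p_vocab, attention, src_ids, vocab_size, p_gen, gen_penalty=0.5):
    """Combine the generation and copy distributions for one decoding step.

    p_vocab     : (vocab_size,) softmax over the output vocabulary
    attention   : (src_len,)   attention weights over source tokens
    src_ids     : (src_len,)   vocabulary ids of the source tokens
    p_gen       : scalar in [0, 1], probability of generating vs. copying
    gen_penalty : multiplicative penalty (< 1) applied to p_gen
    """
    p_gen = gen_penalty * p_gen                 # penalize the generator
    dist = p_gen * p_vocab                      # generation component
    copy = np.zeros(vocab_size)
    np.add.at(copy, src_ids, attention)         # scatter-add copy probabilities
    dist += (1.0 - p_gen) * copy                # copy component
    return dist / dist.sum()                    # renormalize (no-op here, kept for safety)

# Toy example: 5-word vocabulary, 3 source tokens.
p_vocab = np.array([0.1, 0.2, 0.3, 0.3, 0.1])
attention = np.array([0.6, 0.3, 0.1])
src_ids = np.array([2, 4, 1])
print(final_distribution(p_vocab, attention, src_ids, 5, p_gen=0.8))
```

With the penalty applied, more probability mass shifts to the copy component, so terms the patient actually used in the dialogue are more likely to appear verbatim in the summary.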