Knowledge Grounded Conversational Symptom Detection with Graph Memory Networks

Hongyin Luo, Shang-Wen Li, James Glass

3rd Clinical Natural Language Processing Workshop (Clinical NLP 2020)


Abstract: In this work, we propose a novel goal-oriented dialog task, automatic symptom detection. We build a system that can interact with patients through dialog to detect and collect clinical symptoms automatically, saving a doctor's time in interviewing the patient. Given a set of explicit symptoms provided by the patient to initiate a dialog for diagnosis, the system is trained to collect implicit symptoms by asking questions, in order to gather more information for making an accurate diagnosis. After receiving the patient's reply to each question, the system also decides whether the collected information is sufficient for a human doctor to make a diagnosis. To achieve this goal, we propose two neural models and a training pipeline for this multi-step reasoning task. We also build a knowledge graph as an additional input to further improve model performance. Experiments show that our model significantly outperforms the baseline by 4%, discovering 67% of implicit symptoms on average with a limited number of questions.
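To make the task setup concrete, below is a minimal, hypothetical sketch of the question-asking loop the abstract describes: starting from explicit symptoms, the agent proposes candidate symptoms from a knowledge graph, queries the patient, and stops once it judges the information sufficient. The class and function names (SymptomDialogAgent, candidate_questions, should_stop) and the toy graph are illustrative assumptions, not the paper's actual graph memory network or training pipeline.

```python
from typing import Dict, List, Set

# Toy knowledge graph: each symptom points to related symptoms worth asking about.
# The paper's knowledge graph and neural scoring are far richer; this is a stand-in.
KNOWLEDGE_GRAPH: Dict[str, List[str]] = {
    "cough": ["fever", "sore throat", "shortness of breath"],
    "fever": ["chills", "fatigue", "cough"],
    "headache": ["nausea", "blurred vision", "fatigue"],
}


class SymptomDialogAgent:
    """Collects implicit symptoms by asking questions, then decides when to stop."""

    def __init__(self, max_turns: int = 5) -> None:
        self.max_turns = max_turns

    def candidate_questions(self, known: Set[str]) -> List[str]:
        # Expand the knowledge-graph neighborhood of already-confirmed symptoms.
        candidates: List[str] = []
        for symptom in known:
            for neighbor in KNOWLEDGE_GRAPH.get(symptom, []):
                if neighbor not in known and neighbor not in candidates:
                    candidates.append(neighbor)
        return candidates

    def should_stop(self, known: Set[str], asked: Set[str], turn: int) -> bool:
        # Placeholder stopping rule; the paper instead trains a model to decide
        # whether enough information has been collected for a diagnosis.
        remaining = [c for c in self.candidate_questions(known) if c not in asked]
        return turn >= self.max_turns or not remaining

    def run(self, explicit_symptoms: List[str], patient_reply) -> Set[str]:
        known = set(explicit_symptoms)
        asked: Set[str] = set()
        turn = 0
        while not self.should_stop(known, asked, turn):
            # Ask about the first unasked candidate; a learned model would rank these.
            question = [c for c in self.candidate_questions(known) if c not in asked][0]
            asked.add(question)
            if patient_reply(question):  # patient confirms or denies the symptom
                known.add(question)
            turn += 1
        return known


if __name__ == "__main__":
    agent = SymptomDialogAgent(max_turns=3)
    # Simulated patient who confirms "fever" and "chills" if asked.
    confirmed = agent.run(["cough"], lambda q: q in {"fever", "chills"})
    print(confirmed)
```

In the sketch, the heuristic neighborhood expansion and turn limit stand in for the two neural models described in the abstract: one that ranks which implicit symptom to ask about next, and one that decides when the dialog has gathered enough information.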