Explainable Clinical Decision Support from Text
Jinyue Feng, Chantal Shaib, Frank Rudzicz
NLP Applications Long Paper
Abstract:
Clinical prediction models often use structured variables and provide outcomes that are not readily interpretable by clinicians. Further, free-text medical notes may contain information not immediately available in structured variables. We propose a hierarchical CNN-transformer model with explicit attention as an interpretable, multi-task clinical language model, which achieves AUROCs of 0.75 and 0.78 on sepsis and mortality prediction, respectively. We also explore the relationships between learned features from structured and unstructured variables using projection-weighted canonical correlation analysis. Finally, we outline a protocol to evaluate model usability in a clinical decision support context. Domain-expert evaluations indicate that our model generates informative rationales with promising real-life applications.
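For readers unfamiliar with projection-weighted canonical correlation analysis (PWCCA), the sketch below shows one standard way to compute it between two representation matrices (e.g., features learned from structured variables versus from notes). This is a minimal NumPy illustration under the assumption of two activation matrices with matched rows, one per patient encounter; the function name `pwcca` and implementation details (centering, SVD-based CCA, the projection weighting) are illustrative and are not taken from the paper's code.

```python
import numpy as np

def pwcca(X, Y, eps=1e-10):
    """Projection-weighted CCA similarity between two representations.

    X: (n_samples, d1) features from one view (e.g., structured variables).
    Y: (n_samples, d2) features from the other view (e.g., text encoder).
    Returns a scalar in [0, 1]; higher means more similar subspaces.
    """
    # Center each view.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Orthonormal bases for the column spaces (thin SVD).
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)

    # Canonical correlations are the singular values of Ux^T Uy.
    U, rho, _ = np.linalg.svd(Ux.T @ Uy, full_matrices=False)

    # Canonical variates of X in sample space (one column per correlation).
    H = Ux @ U                            # (n_samples, k)

    # Projection weights: how much of X's raw features each variate accounts for.
    alpha = np.abs(H.T @ X).sum(axis=1)   # (k,)
    alpha = alpha / (alpha.sum() + eps)

    # Weighted mean of canonical correlations.
    return float(np.sum(alpha * rho))

# Example usage with random stand-in features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    structured_feats = rng.normal(size=(500, 32))
    text_feats = structured_feats @ rng.normal(size=(32, 64)) + 0.1 * rng.normal(size=(500, 64))
    print(pwcca(structured_feats, text_feats))
```

Unlike a plain average of canonical correlations, the projection weighting emphasizes directions that actually explain the original features, which is why PWCCA is commonly preferred for comparing learned representations.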