Be More with Less: Hypergraph Attention Networks for Inductive Text Classification

Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, Huan Liu

Machine Learning for NLP (Long Paper)

Gather-3C: Nov 17 (18:00-20:00 UTC)


Abstract: Text classification is a critical research topic with broad applications in natural language processing. Recently, graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on this canonical task. Despite this success, their performance can be largely jeopardized in practice because they are (1) unable to capture high-order interactions among words and (2) inefficient at handling large datasets and new documents. To address these issues, in this paper we propose a principled model, hypergraph attention networks (HyperGAT), which obtains more expressive power with less computational cost for text representation learning. Extensive experiments on various benchmark datasets demonstrate the efficacy of the proposed approach on the text classification task.
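The abstract describes HyperGAT only at a high level, so a minimal sketch may help make the architecture concrete. The PyTorch layer below illustrates the general idea of dual attention on a word-level hypergraph: node-level attention aggregates the member words of each hyperedge (e.g., a sentence) into a hyperedge representation, and edge-level attention aggregates a word's incident hyperedges back into an updated word representation. The class name, the dense incidence-matrix encoding, and the scoring functions are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperGATLayer(nn.Module):
    """One hypergraph-attention layer: node -> hyperedge -> node.

    H is a binary incidence matrix of shape (N, E), where H[v, e] = 1
    iff word node v belongs to hyperedge e. This is an illustrative
    sketch, not the paper's exact parameterization.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W1 = nn.Linear(in_dim, out_dim, bias=False)       # node projection
        self.W2 = nn.Linear(out_dim, out_dim, bias=False)      # hyperedge projection
        self.a1 = nn.Linear(out_dim, 1, bias=False)            # node-level attention
        self.a2 = nn.Linear(2 * out_dim, 1, bias=False)        # edge-level attention

    def forward(self, X, H):
        # X: (N, in_dim) word-node features; H: (N, E) incidence matrix.
        mask = (H == 0)

        # Node-level attention: pool member nodes into hyperedge reps.
        Xp = self.W1(X)                                        # (N, out_dim)
        scores = self.a1(torch.tanh(Xp)).squeeze(-1)           # (N,)
        att = scores.unsqueeze(1).expand_as(H)                 # (N, E)
        att = att.masked_fill(mask, float('-inf'))
        alpha = torch.nan_to_num(F.softmax(att, dim=0))        # normalize over nodes
        F_e = alpha.t() @ Xp                                   # (E, out_dim)

        # Edge-level attention: pool incident hyperedges back to nodes.
        Fp = self.W2(F_e)                                      # (E, out_dim)
        pair = torch.cat(
            [Xp.unsqueeze(1).expand(-1, H.size(1), -1),
             Fp.unsqueeze(0).expand(H.size(0), -1, -1)],
            dim=-1)                                            # (N, E, 2*out_dim)
        e = F.leaky_relu(self.a2(pair)).squeeze(-1)            # (N, E)
        e = e.masked_fill(mask, float('-inf'))
        beta = torch.nan_to_num(F.softmax(e, dim=1))           # normalize over edges
        return F.relu(beta @ Fp)                               # (N, out_dim)
```

A document could then be classified by stacking one or two such layers, pooling the resulting word-node representations, and feeding the pooled vector to a linear softmax classifier; because each document gets its own hypergraph, unseen documents can be handled inductively.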


Similar Papers

Is Graph Structure Necessary for Multi-hop Question Answering?
Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
Text Segmentation by Cross Segment Attention
Michal Lukasik, Boris Dadachev, Kishore Papineni, Gonçalo Simões
Less is More: Attention Supervision with Counterfactuals for Text Classification
Seungtaek Choi, Haeju Park, Jinyoung Yeo, Seung-won Hwang
Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction
Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz