Learning VAE-LDA Models with Rounded Reparameterization Trick
Runzhi Tian, Yongyi Mao, Richong Zhang
Machine Learning for NLP Long Paper
Abstract:
The introduction of the VAE provides an efficient framework for learning generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, the reparameterization trick, a central technique of the VAE, is not applicable: no reparameterization of the Dirichlet distribution that supports the trick is known to date. In this work, we propose a new method, which we call the Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. Applied to a VAE-LDA model, this method is shown experimentally to outperform existing neural topic models on several benchmark datasets and on a synthetic dataset.
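For context, below is a minimal sketch (in Python with PyTorch, chosen here for illustration; the function name is ours, not from the paper) of the standard reparameterization trick for a diagonal Gaussian, the case where the trick does apply. The paper's Rounded Reparameterization Trick for the Dirichlet case is not reproduced here.

import torch

def gaussian_reparameterize(mu, log_var):
    # Standard reparameterization trick for a diagonal Gaussian:
    # draw parameter-free noise eps ~ N(0, I), then transform it
    # deterministically as z = mu + sigma * eps. Since z is a
    # differentiable function of (mu, log_var), gradients of a
    # downstream loss flow back through the sampling step.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

No analogous transform that turns parameter-free noise into a Dirichlet sample is known, which is the obstacle the abstract describes for VAE-LDA models.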
Similar Papers
Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space
Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang, Jianfeng Gao

Short Text Topic Modeling with Topic Distribution Quantization and Negative Sampling Decoder
Xiaobao Wu, Chunping Li, Yan Zhu, Yishu Miao

Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models
Alexander Terenin, Måns Magnusson, Leif Jonsson
