Learning VAE-LDA Models with Rounded Reparameterization Trick
Runzhi Tian, Yongyi Mao, Richong Zhang
Machine Learning for NLP Long Paper
Abstract:
The introduction of the VAE provides an efficient framework for learning generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, a central technique of the VAE, the reparameterization trick, is not applicable, because no reparameterization of the Dirichlet distribution that supports the trick is known to date. In this work, we propose a new method, which we call the Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. Applied to a VAE-LDA model, RRT is shown experimentally to outperform existing neural topic models on several benchmark datasets and on a synthetic dataset.
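To make the obstacle concrete, the sketch below illustrates the standard reparameterization trick for a Gaussian latent variable and contrasts it with Dirichlet sampling. This is an illustration of the general problem the abstract refers to, not of the RRT method itself; the function names are hypothetical.

```python
import numpy as np

# Standard reparameterization trick for a Gaussian latent variable:
# z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1),
# so the sample is a deterministic, differentiable function of the parameters,
# and gradients can flow through mu and sigma.
def reparameterize_gaussian(mu, sigma, eps):
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
sigma = np.array([1.0, 2.0])
eps = rng.standard_normal(2)          # parameter-free noise
z = reparameterize_gaussian(mu, sigma, eps)

# A Dirichlet sample, by contrast, is typically drawn by normalizing Gamma
# variates, and the Gamma sampler itself depends on the parameters alpha.
# No closed-form map theta = g(alpha, eps) with a parameter-free noise
# distribution eps is known, which is why the trick does not carry over.
alpha = np.array([0.3, 0.5, 0.2])
gammas = rng.gamma(shape=alpha)       # sampling path depends on alpha
theta = gammas / gammas.sum()         # normalized Gammas give a Dirichlet draw
```

In the Gaussian case, all randomness lives in `eps`, so the sampling step poses no problem for backpropagation; in the Dirichlet case, the randomness is entangled with the parameters, which is the gap RRT is designed to close.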