We propose DGST, a novel and simple Dual-Generator network architecture for text Style Transfer. Our model employs only two generators and relies on neither discriminators nor a parallel corpus for training. Both quantitative and qualitative experiments on the Yelp and IMDb datasets show that our model achieves competitive performance compared to several strong baselines with more complex architectures.
How it works:
We count the number of new messages for every channel over the last N seconds, then sort the channels by that count in descending order. Channels with no new messages are placed afterwards in random order. Note that the message counts may not be exact.
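The ranking described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name and the assumption that messages arrive as (channel, timestamp) pairs are hypothetical.

```python
import random
from time import time


def rank_channels(messages, window_seconds):
    """Rank channels by message count in the last `window_seconds`.

    `messages` is a list of (channel, timestamp) pairs -- a hypothetical
    input shape chosen for this sketch.
    """
    cutoff = time() - window_seconds
    counts = {}
    for channel, ts in messages:
        counts.setdefault(channel, 0)  # register every channel, even idle ones
        if ts >= cutoff:
            counts[channel] += 1       # only count messages inside the window

    # Channels with new messages, sorted by count in descending order.
    active = sorted((c for c in counts if counts[c] > 0),
                    key=lambda c: counts[c], reverse=True)

    # Channels with no new messages get a random order.
    idle = [c for c in counts if counts[c] == 0]
    random.shuffle(idle)

    return active + idle
```

For example, with two recent messages in channel "a", one in "b", and only an old message in "c", the result starts with "a", then "b", with "c" shuffled into the idle tail.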