T7: The Amazing World of Neural Language Generation

Yangfeng Ji, Antoine Bosselut, Thomas Wolf, Asli Celikyilmaz

Live Session 1: Nov 20 (19:00-20:00 UTC)
Live Session 2: Nov 21 (01:00-02:00 UTC)
Abstract: Neural Language Generation (NLG) -- using neural network models to generate coherent text -- is among the most promising methods for automated text creation. Recent years have seen a paradigm shift in neural text generation, driven by advances in deep contextual language modeling (e.g., LSTMs, GPT, GPT-2) and transfer learning (e.g., ELMo, BERT). While these tools have dramatically improved the state of NLG, particularly for low-resource tasks, state-of-the-art NLG models still face many challenges: a lack of diversity in generated text, commonsense violations in depicted situations, difficulties in making use of factual information, and difficulties in designing reliable evaluation metrics. In this tutorial, we will present an overview of the current state of the art in neural network architectures and how they have shaped recent research directions in text generation. We will discuss how and why these models succeed or fail at generating coherent text, and provide insights into several applications.
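To make the abstract's central idea concrete, here is a minimal sketch (not material from the tutorial itself) of generating text with a pretrained language model, using GPT-2 via the Hugging Face transformers library; the checkpoint name, prompt, and sampling parameters are illustrative assumptions.

```python
# A minimal sketch of neural text generation with a pretrained GPT-2 model,
# using the Hugging Face transformers library. The checkpoint ("gpt2"),
# prompt, and sampling parameters are illustrative choices, not settings
# prescribed by the tutorial.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Neural language generation is"
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus (top-p) sampling: one common way to trade off fluency against
# the diversity issues mentioned in the abstract.
output_ids = model.generate(
    inputs["input_ids"],
    max_length=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping do_sample=True for greedy or beam-search decoding makes the diversity problem discussed in the abstract easy to observe firsthand.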

Time Event Hosts
Nov 20 (19:00-20:00 UTC) Q&A Yangfeng Ji, Antoine Bosselut, Thomas Wolf, and Asli Celikyilmaz
Nov 21 (01:00-02:00 UTC) Q&A Yangfeng Ji, Antoine Bosselut, Thomas Wolf, and Asli Celikyilmaz
Information about the virtual format of this tutorial: This tutorial has a prerecorded talk on this page (see below) that you can watch at any time during the conference. It also has two live sessions, conducted on Zoom and livestreamed on this page. Additionally, it has a chat window that you can use to discuss with the tutorial presenters and other attendees at any time during the conference.