RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text

Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Chris Callison-Burch

Demo Paper

Gather-5J: Nov 18 (18:00-20:00 UTC)

Abstract: In recent years, large neural networks for natural language generation (NLG) have made leaps and bounds in their ability to generate fluent text. However, the tasks of evaluating quality differences between NLG systems and understanding how humans perceive the generated text remain both crucial and difficult. In this system demonstration, we present Real or Fake Text (RoFT), a website that tackles both of these challenges by inviting users to try their hand at detecting machine-generated text in a variety of domains. We introduce a novel evaluation task based on detecting the boundary at which a text passage that starts off human-written transitions to being machine-generated. We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.
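The boundary-detection task described in the abstract can be illustrated with a minimal sketch. The setup assumed below is a passage whose first k sentences are human-written and whose remaining sentences are machine-generated; a player guesses the index of the first machine-generated sentence. The scoring rule shown (full credit for an exact guess, one point lost per sentence the guess falls late, no credit for guessing too early) is an illustrative assumption, not necessarily the exact scheme used by RoFT.

    from dataclasses import dataclass

    @dataclass
    class RoFTExample:
        """One passage: the first `boundary` sentences are human-written,
        the rest are machine-generated continuations."""
        sentences: list[str]
        boundary: int  # index of the first machine-generated sentence

    def score_guess(example: RoFTExample, guessed_index: int, max_points: int = 5) -> int:
        """Hypothetical scoring rule (assumption, for illustration only):
        full credit for guessing the exact boundary sentence, one point lost
        per sentence the guess falls after the true boundary, and zero credit
        for flagging a human-written sentence as machine-generated."""
        if guessed_index < example.boundary:
            return 0
        return max(0, max_points - (guessed_index - example.boundary))

    if __name__ == "__main__":
        example = RoFTExample(
            sentences=[
                "The city council met on Tuesday to discuss the new budget.",  # human-written
                "Several residents spoke during the public comment period.",   # human-written
                "The council then voted to allocate the funds to the moon.",   # machine-generated
                "Officials said the decision was final and binding.",          # machine-generated
            ],
            boundary=2,
        )
        print(score_guess(example, guessed_index=2))  # exact guess -> 5
        print(score_guess(example, guessed_index=3))  # one sentence late -> 4
        print(score_guess(example, guessed_index=1))  # too early -> 0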

Similar Papers

Authorship Attribution for Neural Text Generation
Adaku Uchendu, Thai Le, Kai Shu, Dongwon Lee

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh

Sparse Text Generation
Pedro Henrique Martins, Zita Marinho, André F. T. Martins