Nov 20, 13:00-13:10 UTC | Opening Remarks | Zeerak Waseem
Nov 20, 13:10-13:55 UTC | Keynote: TBD - André Brock (Zoom link 1) | Bertie Vidgen
Nov 20, 13:55-14:40 UTC | Keynote: "100,000 Lines in the Sand": Agency and Policy Feedback in Content Moderation - Alex Hanna and Maliha Ahmed (Zoom link 1) | Bertie Vidgen
Nov 20, 14:40-14:45 UTC | Break | TBD
Nov 20, 14:45-15:30 UTC | Keynote: "The Root of Algorithmic Bias and How to Deal with It" - Maria Rodriguez (Zoom link 1) | Bertie Vidgen
Nov 20, 15:30-15:40 UTC | Break | TBD
Nov 20, 15:40-16:40 UTC | Keynote Panel: Maliha Ahmed, André Brock, Alex Hanna and Maria Rodriguez (Zoom link 1) | Bertie Vidgen
Nov 20, 16:40-17:00 UTC | Break | TBD
Nov 20, 17:00-17:45 UTC | Paper Q&A Panels I (3 parallel sessions) | Zeerak Waseem
Nov 20, 17:00-17:45 UTC | Session 1, Panel I: Methods for classifying online abuse (Zoom link 1) | Zeerak Waseem
• A Novel Methodology for Developing Automatic Harassment Classifiers for Twitter (Ishaan Arora, Julia Guo, Sarah Ita Levitan, Susan McGregor and Julia Hirschberg)
• Using Transfer-based Language Models to Detect Hateful and Offensive Language Online (Vebjørn Isaksen and Björn Gambäck)
• Fine-tuning BERT for multi-domain and multi-label incivil language detection (Kadir Bulut Ozler, Kate Kenski, Steve Rains, Yotam Shmargad, Kevin Coe and Steven Bethard)
• HurtBERT: Incorporating Lexical Features with BERT for the Detection of Abusive Language (Anna Koufakou, Endang Wahyu Pamungkas, Valerio Basile and Viviana Patti)
• Abusive Language Detection using Syntactic Dependency Graphs (Kanika Narang and Chris Brew)
Nov 20, 17:00-17:45 UTC | Session 1, Panel II: Biases in datasets for abuse (Zoom link 2) | Zeerak Waseem
• Impact of politically biased data on hate speech classification (Maximilian Wich, Jan Bauer and Georg Groh)
• Identifying and Measuring Annotator Bias Based on Annotators’ Demographic Characteristics (Hala Al Kuwatly, Maximilian Wich and Georg Groh)
• Investigating Annotator Bias with a Graph-Based Approach (Maximilian Wich, Hala Al Kuwatly and Georg Groh)
• Reducing Unintended Identity Bias in Russian Hate Speech Detection (Nadezhda Zueva, Madina Kabirova and Pavel Kalaidin)
• Investigating Sampling Bias in Abusive Language Detection (Dante Razo and Sandra Kübler)
• Is your toxicity my toxicity? Understanding the influence of rater identity on perceptions of toxicity (Ian Kivlichan, Olivia Redfield, Rachel Rosen, Raquel Saxe, Nitesh Goyal and Lucy Vasserman)
Nov 20, 17:00-17:45 UTC | Session 1, Panel III: Technical challenges in classifying online abuse (Zoom link 3) | Zeerak Waseem
• Attending the Emotions to Detect Online Abusive Language (Niloofar Safi Samghabadi, Afsheen Hatami, Mahsa Shafaei, Sudipta Kar and Thamar Solorio)
• Enhancing the Identification of Cyberbullying through Participant Roles (Gathika Rathnayake, Thushari Atapattu, Mahen Herath, Georgia Zhang and Katrina Falkner)
• Developing a New Classifier for Automated Identification of Incivility in Social Media (Sam Davidson, Qiusi Sun and Magdalena Wojcieszak)
• [Findings] Hybrid Emoji-Based Masked Language Models for Zero-Shot Abusive Language Detection (Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli and Serena Villata)
• Countering hate on social media: Large scale classification of hate and counter speech (Joshua Garland, Keyan Ghazi-Zahedi, Jean-Gabriel Young, Laurent Hébert-Dufresne and Mirta Galesic)
Nov 20, 17:45-18:00 UTC | Break | TBD
Nov 20, 18:00-18:45 UTC | Paper Q&A Panels II (3 parallel sessions) | Zeerak Waseem
Nov 20, 18:00-18:45 UTC | Session 2, Panel IV: Ways of tackling online abuse (Zoom link 1) | Zeerak Waseem
• Moderating Our (Dis)Content: Renewing the Regulatory Approach (Claire Pershan)
• Investigating takedowns of misogynistic abuse on Twitter: a story of iterative failures and partial successes (Nicolas Suzor and Rosalie Gillett)
• Six Attributes of Unhealthy Conversations (Ilan Price, Jordan Gifford-Moore, Jory Fleming, Saul Musker, Maayan Roichman, Guillaume Sylvain, Nithum Thain, Lucas Dixon and Jeffrey Sorensen)
• Free Expression By Design: Improving In-Platform Features & Third-Party Tools to Tackle Online Abuse (Viktorya Vilk, Elodie Vialle and Matt Bailey)
• A Unified Taxonomy of Harmful Content (Michele Banko, Brendon MacKeen and Laurie Ray)
Nov 20, 18:00-18:45 UTC | Session 2, Panel V: New datasets for abuse (Zoom link 2) | Zeerak Waseem
• Towards a Comprehensive Taxonomy and Large-Scale Annotated Corpus for Online Slur Usage (Jana Kurrek, Haji Mohammad Saleem and Derek Ruths)
• In Data We Trust: A Critical Analysis of Hate Speech Detection Datasets (Kosisochukwu Madukwe, Xiaoying Gao and Bing Xue)
• Detecting East Asian Prejudice on Social Media (Bertie Vidgen, Scott Hale, Ella Guest, Helen Margetts, David Broniatowski, Zeerak Waseem, Austin Botelho, Matthew Hall and Rebekah Tromble)
• On Cross-Dataset Generalization in Automatic Detection of Online Abuse (Isar Nejadgholi and Svetlana Kiritchenko)
• [Findings] A little goes a long way: Improving toxic language classification despite data scarcity (Mika Juuti, Tommi Gröndahl, Adrian Flanagan and N. Asokan)
Nov 20, 18:45-19:00 UTC | Break | TBD
Nov 20, 19:00-19:20 UTC | Online Abuse and Human Rights: Report on WOAH RightsCon Satellite Session (Zoom link 1) | Vinodkumar Prabhakaran
Nov 20, 19:20-19:30 UTC | Closing Remarks (Zoom link 1) | Vinodkumar Prabhakaran