Matthew Lease

University of Texas at Austin, Amazon Scholar

Date: 23 November 2020

Time: 16:00 (CET)

Title: Adventures in Crowdsourcing: Toward Safer Content Moderation and Better Supporting Complex Annotation Tasks [slides]

Abstract: I'll begin the talk by discussing content moderation. While most user content posted on social media is benign, other content, such as violent or adult imagery, must be detected and blocked. Unfortunately, such detection is difficult to automate, due to high accuracy requirements, the cost of errors, and nuanced rules for acceptable content. Consequently, social media platforms today rely on a vast workforce of human moderators. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to some moderators. To mitigate such harm, we investigate a set of blur-based moderation interfaces for reducing exposure to disturbing content while preserving moderators' ability to quickly and accurately flag it. We report experiments with Mechanical Turk workers to measure moderator accuracy, speed, and emotional well-being across six alternative designs. Our key findings show that interactive blurring designs can reduce emotional impact without sacrificing moderation accuracy or speed. See our online demo at: http://ir.ischool.utexas.edu/CM/demo/.
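To make the blurring idea concrete, here is a minimal, hypothetical sketch using the Pillow imaging library; the file name, blur radii, and reveal schedule are illustrative assumptions, not details taken from the talk or the demo:

# A minimal sketch (not the authors' implementation) of interactive
# blurring: content starts heavily blurred, and the moderator reveals
# detail in steps only when needed to make a judgment.
from PIL import Image, ImageFilter

def blurred(img: Image.Image, radius: float) -> Image.Image:
    """Return a Gaussian-blurred copy; a larger radius hides more detail."""
    return img.filter(ImageFilter.GaussianBlur(radius))

original = Image.open("frame.jpg")   # hypothetical input image
levels = [16.0, 8.0, 4.0, 0.0]       # coarse view -> fully revealed
views = [blurred(original, r) for r in levels]
# A real interface would display views[0] first and step toward
# views[-1] only on moderator request, minimizing unblurred exposure.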

The second part of my talk will discuss aggregation modeling. Though many models have been proposed for aggregating binary or categorical labels, prior methods do not generalize to complex annotations (e.g., open-ended text, multivariate, or structured responses) without devising a new model for each specific task. To obviate the need for task-specific modeling, we propose to model distances between labels rather than the labels themselves. Our models are largely agnostic to the distance function; we leave it to requesters to specify a distance function appropriate to their annotation task. We propose three models of annotation quality, including a Bayesian hierarchical extension of multidimensional scaling that can be trained in an unsupervised or semi-supervised manner. Results show the generality and effectiveness of our models across diverse complex annotation tasks: sequence labeling, translation, syntactic parsing, and ranking.
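As a toy illustration of the distance-based idea (not the talk's actual models), the sketch below aggregates by picking the "medoid" annotation, the one with the smallest average distance to the others, under whatever distance function the requester supplies. The token-overlap distance and the example strings are illustrative assumptions:

# Task-agnostic aggregation over a requester-supplied distance function.
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")  # annotations may be text, trees, rankings, ...

def medoid(annotations: Sequence[T],
           distance: Callable[[T, T], float]) -> T:
    """Pick the annotation closest on average to all the others."""
    def avg_dist(a: T) -> float:
        return sum(distance(a, b) for b in annotations) / len(annotations)
    return min(annotations, key=avg_dist)

# Crude token-overlap distance for open-ended text; any task-appropriate
# metric (edit distance, 1 - BLEU, tree distance, rank correlation)
# could be plugged in instead.
def token_distance(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return 1.0 - len(sa & sb) / len(sa | sb)

answers = ["the cat sat on the mat",
           "a cat sat on the mat",
           "the dog ran away"]
print(medoid(answers, token_distance))  # -> "the cat sat on the mat"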

Speaker Biography: Matthew Lease is an Associate Professor in the School of Information at the University of Texas at Austin, where he co-leads Good Systems (http://goodsystems.utexas.edu/), an eight-year Grand Challenge to design responsible AI technologies. In addition, Lease is an Amazon Scholar, working on Amazon Mechanical Turk, SageMaker Ground Truth, and Augmented Artificial Intelligence (A2I). He previously worked at CrowdFlower. Lease received the Best Paper award at the 2016 AAAI Conference on Human Computation and Crowdsourcing (HCOMP), as well as three early-career awards for crowdsourcing (NSF, DARPA, IMLS). From 2011 to 2013, Lease co-organized the National Institute of Standards and Technology (NIST) Text Retrieval Conference (TREC) crowdsourcing track.

Homepage: https://www.ischool.utexas.edu/~ml/.

Video Recording