Designing effective interfaces for motivating engagement in crowdsourced image labeling


Bibliographic Details
Online Access:http://hdl.handle.net/2047/D20398326
id ndltd-NEU--neu-bz60d000z
record_format oai_dc
collection NDLTD
sources NDLTD
description Crowdsourcing has been established as a viable solution for rapidly analyzing large amounts of data that require human judgment. Within this area, image labeling tasks have gained wide adoption across disciplines as a popular and cost-effective approach to image analysis; notable examples include disaster response and monitoring, environmental justice, animal preservation, and biomedical processing. However, the repetitive and tedious nature of such tasks makes it difficult for project creators to retain participants, who often disengage early in the process. This not only concentrates labeling in a small group of participants who do most of the work, but also threatens smaller projects' ability to gain traction, collect enough data, and succeed in their awareness and volunteer-recruitment efforts.
This thesis tackles crowd disengagement in image labeling tasks along three main directions. The first part focuses on how traditional interfaces can be enhanced to motivate engagement. We start by exploring the interplay of paid versus volunteer work, as well as volunteer motivations, which inform our interface design. We then focus on task variety, a non-monetary approach to making the work more motivating, and within this traditional image labeling setting we examine the effects of fixed scheduling schemes on task engagement. The second part of the thesis injects game mechanics to create more interactive image labeling interfaces. We explore image matching game mechanics in online image labeling and how they can lead to more engaging participant experiences, and we revisit task variety and difficulty adjustment for tackling disengagement, deploying reinforcement-learning-based approaches to design variable and adaptive scheduling mechanisms.
Finally, the third part examines co-location in image labeling via a multi-person tabletop image labeling game toolkit. We explore the interplay between collaboration and competition and its capacity to motivate deeper engagement by fostering rich discussion among participants. We compare different rulesets and settings, aiming to achieve high engagement across participants with different backgrounds; findings from participant interviews, along with observations, further inform the toolkit's design. These research directions are addressed by building an open-source crowdsourcing framework comprising the three image labeling tools that drive this thesis: 1) a classic image labeling interface called Cartoscope, 2) an image matching game called Tile-o-Scope Grid, and 3) an Augmented Reality (AR) tabletop image labeling toolkit called Tile-o-Scope AR. These implementations are evaluated via user studies, including experiments on Amazon Mechanical Turk, a popular crowdsourcing marketplace.
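The description mentions reinforcement-learning-based adaptive scheduling only at a high level. As a purely hypothetical sketch (not the thesis's actual mechanism; the class and task-type names below are invented for illustration), such an adaptive scheduler could be framed as an epsilon-greedy multi-armed bandit that learns which task type keeps participants engaged:

```python
import random


class EpsilonGreedyScheduler:
    """Pick the next task type based on observed per-type engagement rewards."""

    def __init__(self, task_types, epsilon=0.1):
        self.task_types = list(task_types)
        self.epsilon = epsilon
        self.counts = {t: 0 for t in self.task_types}   # times each type was served
        self.values = {t: 0.0 for t in self.task_types}  # running mean reward per type

    def select(self):
        # Explore a random task type with probability epsilon;
        # otherwise exploit the type with the highest mean reward so far.
        if random.random() < self.epsilon:
            return random.choice(self.task_types)
        return max(self.task_types, key=lambda t: self.values[t])

    def update(self, task_type, reward):
        # Incremental mean update using the latest engagement signal
        # (e.g. whether the participant continued to the next task).
        self.counts[task_type] += 1
        n = self.counts[task_type]
        self.values[task_type] += (reward - self.values[task_type]) / n
```

In use, the scheduler would call `select()` to choose the next task shown to a participant and `update()` with an engagement signal (such as session continuation) after each task; variable schedules arise naturally because the served mix shifts as reward estimates change.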
title Designing effective interfaces for motivating engagement in crowdsourced image labeling
url http://hdl.handle.net/2047/D20398326
_version_ 1719406519214145536