Adaptive trust calibration for human-AI collaboration.

Safety and efficiency in human-AI collaboration often depend on how well humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill this gap, we propose a method of adaptive trust calibration consisting of a framework that detects an inappropriate calibration status by monitoring the user's reliance behavior, and cognitive cues called "trust calibration cues" that prompt the user to reinitiate trust calibration. We evaluated the framework and four types of trust calibration cues in an online experiment using a drone simulator. A total of 116 participants performed pothole-inspection tasks using the drone's automatic inspection, whose reliability could fluctuate with the weather conditions. Participants had to decide whether to rely on automatic inspection or to inspect manually. The results showed that adaptively presenting simple cues significantly promoted trust calibration during over-trust.
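
The record contains no code; the plain-Python sketch below only illustrates the idea summarized in the abstract: monitor the user's recent reliance behavior, compare it with the automation's recent reliability, and raise a simple trust calibration cue when over-trust is suspected. Every name, threshold, and the detection rule itself is an assumption made here for exposition, not the authors' implementation.

    # Illustrative sketch only -- names, thresholds, and the detection rule are
    # assumptions, not the method published in the paper.
    from dataclasses import dataclass

    @dataclass
    class Trial:
        relied_on_ai: bool    # user accepted the drone's automatic inspection result
        ai_was_correct: bool  # whether the automatic inspection was actually correct

    def detect_over_trust(history, window=5,
                          reliance_threshold=0.8,
                          reliability_threshold=0.6):
        """Flag possible over-trust: the user keeps relying on the automation
        even though its recent reliability has dropped (hypothetical rule)."""
        recent = history[-window:]
        if len(recent) < window:
            return False
        reliance_rate = sum(t.relied_on_ai for t in recent) / window
        reliability = sum(t.ai_was_correct for t in recent) / window
        return reliance_rate >= reliance_threshold and reliability < reliability_threshold

    def maybe_present_cue(history):
        # On detected over-trust, present a simple trust calibration cue that
        # prompts the user to re-evaluate reliance on automatic inspection.
        if detect_over_trust(history):
            print("Cue: automatic inspection may be unreliable in the current "
                  "weather -- consider inspecting manually.")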

Bibliographic Details
Main Authors: Kazuo Okamura, Seiji Yamada
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2020-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0229132