Verbal explanations by collaborating robot teams

In this article, we present work on collaborating robot teams that use verbal explanations of their actions and intentions in order to be more understandable to the human. For this, we introduce a mechanism that determines what information the robots should verbalize in accordance with Grice’s maxim of quantity, i.e., convey as much information as is required and no more or less. Our setup is a robot team collaborating to achieve a common goal while explaining in natural language what they are currently doing and what they intend to do. The proposed approach is implemented on three Pepper robots moving objects on a table. It is evaluated by human subjects answering a range of questions about the robots’ explanations, which are generated using either our proposed approach or two further approaches implemented for evaluation purposes. Overall, we find that our proposed approach leads to the most understanding of what the robots are doing. In addition, we further propose a method for incorporating policies driving the distribution of tasks among the robots, which may further support understandability.
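The core idea in the abstract, verbalizing in accordance with Grice’s maxim of quantity, can be illustrated with a minimal sketch. This is not the authors’ implementation; the function and fact strings below are hypothetical, and the sketch only shows the general principle of saying exactly what the listener does not already know, and no more:

```python
# Illustrative sketch (NOT the paper's implementation): a robot selects which
# facts to verbalize under Grice's maxim of quantity -- it states only facts
# that are new to the listener, skipping anything already known.

def select_utterances(candidate_facts, listener_knowledge):
    """Return the candidate facts that are informative to the listener,
    preserving their original order."""
    selected = []
    known = set(listener_knowledge)
    for fact in candidate_facts:
        if fact not in known:
            selected.append(fact)
            known.add(fact)  # once spoken, the listener knows it
    return selected

# Hypothetical example facts for a tabletop manipulation scenario:
robot_facts = [
    "I will move the red cube to the left",
    "The table is in front of us",          # already known -> omitted
    "Robot 2 will then stack the blue cube",
]
listener = ["The table is in front of us"]
print(select_utterances(robot_facts, listener))
# prints the two facts the listener did not already know
```

Under this reading, saying less would under-inform the listener and saying more (e.g., restating the table’s position) would violate the maxim by over-informing.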


Bibliographic Details
Main Authors: Singh, Avinash Kumar; Baranwal, Neha; Richter, Kai-Florian; Hellström, Thomas; Bensch, Suna
Format: Article
Language: English
Published: De Gruyter, 2020-11-01
Series: Paladyn: Journal of Behavioral Robotics
Subjects: understandable robots; robot teams; explainable AI; human-robot interaction; natural language generation; Grice’s maxim of quantity; informativeness
Online Access: https://doi.org/10.1515/pjbr-2021-0001
Record ID: doaj-f7f3509a94504aea951b6f330418e014 (DOAJ)
ISSN: 2081-4836
Volume 12, Issue 1 (2020), pp. 47-57
Author affiliation (all authors): Department of Computing Science, Umeå University, Sweden