Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement

Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Fourth, adding or removing demonstrations incurs low computational load, so the robot's skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware in a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations.
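The model-selection step described above can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not the paper's implementation: the `SkillLibrary` class, the Euclidean `situation_distance`, and the fixed `threshold` are assumptions introduced here for clarity, whereas the article defines its own distance function over TP-GMM task parameters.

```python
import numpy as np

class SkillLibrary:
    """Minimal sketch of the hierarchical idea: one model per demonstrated
    situation, selected (or rejected) via a distance over task parameters."""

    def __init__(self, threshold=1.0):
        self.models = []            # list of (task_params, model) pairs
        self.threshold = threshold  # illustrative cutoff, not from the paper

    def add_demonstration(self, task_params, model):
        # Adding a situation appends one entry; no other model is retrained.
        self.models.append((np.asarray(task_params, dtype=float), model))

    def remove_demonstration(self, index):
        # Removal is equally cheap: the rest of the library is untouched.
        self.models.pop(index)

    @staticmethod
    def situation_distance(p, q):
        # Placeholder Euclidean metric over task parameters (e.g., frame
        # origins); the article defines its own distance function.
        return float(np.linalg.norm(p - q))

    def select(self, test_params):
        """Return the closest stored model, or None to signal that new
        demonstrations should be requested for this test situation."""
        test_params = np.asarray(test_params, dtype=float)
        if not self.models:
            return None
        dists = [self.situation_distance(test_params, p)
                 for p, _ in self.models]
        best = int(np.argmin(dists))
        return self.models[best][1] if dists[best] <= self.threshold else None
```

Because each demonstrated situation is stored as its own TP-GMM, adding or removing an entry touches only that entry, which is the property the abstract credits for low computational load and incremental skill-library construction.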

Bibliographic Details
Main Authors: Siyao Hu, Katherine J. Kuchenbecker
Affiliation: Department of Mechanical Engineering and Applied Mechanics and GRASP Laboratory, University of Pennsylvania, Philadelphia, PA 19104, USA
Format: Article
Language: English
Published: Hindawi Limited, 2019-01-01
Series: Applied Bionics and Biomechanics
ISSN: 1176-2322, 1754-2103
Online Access: http://dx.doi.org/10.1155/2019/9765383