O2A: One-Shot Observational Learning with Action Vectors

We present O2A, a novel method for learning to perform robotic manipulation tasks from a single (one-shot) third-person demonstration video. To our knowledge, it is the first time this has been done for a single demonstration. The key novelty lies in pre-training a feature extractor for creating a perceptual representation for actions that we call “action vectors”. The action vectors are extracted using a 3D-CNN model pre-trained as an action classifier on a generic action dataset. The distance between the action vectors from the observed third-person demonstration and trial robot executions is used as a reward for reinforcement learning of the demonstrated task. We report on experiments in simulation and on a real robot, with changes in viewpoint of observation, properties of the objects involved, scene background and morphology of the manipulator between the demonstration and the learning domains. O2A outperforms baseline approaches under different domain shifts and has comparable performance with an Oracle (that uses an ideal reward function). Videos of the results, including demonstrations, can be found on our project website.

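The abstract describes the core mechanism: action vectors are extracted from the demonstration video and from each robot trial, and the distance between them is used as the reinforcement-learning reward. The sketch below illustrates that idea only; the feature extractor is a stand-in for the pre-trained 3D-CNN, and the function names and cosine-distance metric are illustrative assumptions, not the paper's exact implementation.

import numpy as np

# Minimal illustrative sketch (not the authors' code): both the demonstration
# clip and a robot trial clip are mapped to fixed-length "action vectors",
# and their distance defines the reward handed to the RL algorithm.

def extract_action_vector(clip: np.ndarray) -> np.ndarray:
    """Placeholder for the pre-trained 3D-CNN: maps a clip of shape (T, H, W, C)
    to a fixed-length embedding. Here we flatten and truncate so the sketch
    runs end to end."""
    return clip.astype(np.float32).ravel()[:512]

def action_vector_reward(demo_clip: np.ndarray, trial_clip: np.ndarray) -> float:
    """Reward for a robot trial: closer to 0 when the trial's action vector
    is near the demonstration's action vector, more negative as they diverge."""
    v_demo = extract_action_vector(demo_clip)
    v_trial = extract_action_vector(trial_clip)
    cos_sim = float(np.dot(v_demo, v_trial) /
                    (np.linalg.norm(v_demo) * np.linalg.norm(v_trial) + 1e-8))
    return cos_sim - 1.0

# Usage: after each trial rollout, the reward is passed to a standard RL algorithm.
demo = np.random.rand(16, 112, 112, 3)   # one third-person demonstration clip
trial = np.random.rand(16, 112, 112, 3)  # frames captured from a robot trial
print(action_vector_reward(demo, trial))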

Bibliographic Details
Main Authors: Leo Pauly, Wisdom C. Agboh, David C. Hogg (University of Leeds, Leeds, United Kingdom); Raul Fuentes (RWTH Aachen University, Aachen, Germany)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-08-01
Series: Frontiers in Robotics and AI
ISSN: 2296-9144
DOI: 10.3389/frobt.2021.686368
Subjects: observational learning, visual perception, reinforcement learning, transfer learning, robotic manipulation
Online Access: https://www.frontiersin.org/articles/10.3389/frobt.2021.686368/full