A Teleological Approach to Robot Programming by Demonstration

Bibliographic Details
Main Author: Sweeney, John Douglas
Format: Others
Published: ScholarWorks@UMass Amherst 2011
Online Access:https://scholarworks.umass.edu/open_access_dissertations/351
https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1352&context=open_access_dissertations
Description
Summary: This dissertation presents an approach to robot programming by demonstration based on two key concepts: demonstrator intent is the most meaningful signal that the robot can observe, and the robot should have a basic level of behavioral competency from which to interpret observed actions. Intent is a teleological, robust teaching signal that is invariant to many common sources of noise in training. The robot can use the knowledge encapsulated in sensorimotor schemas to interpret the demonstration. Furthermore, knowledge gained in prior demonstrations can be applied to future sessions. I argue that programming by demonstration should be organized into declarative and procedural components. The declarative component represents a reusable outline of underlying behavior that can be applied to many different contexts. The procedural component represents the dynamic portion of the task that is based on features observed at run time. I describe how statistical models, and Bayesian methods in particular, can be used to model these components. These models have many features that are beneficial for learning in this domain, such as tolerance for uncertainty and the ability to incorporate prior knowledge into inferences. I demonstrate this architecture through experiments on a bimanual humanoid robot using tasks from the pick-and-place domain. Additionally, I develop and experimentally validate a model, learned from demonstration data, that generates grasp candidates from visual features; this model is especially useful in the context of pick-and-place tasks.
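To make the Bayesian modeling described above concrete, the following is a minimal, hypothetical sketch (Python with NumPy; not code from the dissertation) of how grasp candidates might be ranked by posterior odds under diagonal-Gaussian models of visual features fit to demonstrated good and bad grasps. The feature layout, toy data, and diagonal-Gaussian choice are all assumptions made for illustration.

    import numpy as np

    # Hypothetical illustration only: rank grasp candidates by the posterior
    # odds that their visual features resemble demonstrated (good) grasps.
    # Features, data, and model form are invented for this sketch.

    def fit_gaussian(features):
        """Fit an independent (diagonal) Gaussian to demonstrated feature rows."""
        mu = features.mean(axis=0)
        var = features.var(axis=0) + 1e-6  # variance floor for stability
        return mu, var

    def log_likelihood(x, mu, var):
        """Log-density of feature vector x under the diagonal Gaussian."""
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    def rank_candidates(candidates, good_model, bad_model, prior_good=0.5):
        """Rank candidates by log posterior odds of being a good grasp."""
        scores = []
        for x in candidates:
            log_odds = (log_likelihood(x, *good_model)
                        - log_likelihood(x, *bad_model)
                        + np.log(prior_good) - np.log(1 - prior_good))
            scores.append(log_odds)
        order = np.argsort(scores)[::-1]  # best-first
        return order, np.asarray(scores)

    # Toy demonstration data: rows are invented features, e.g.
    # (edge length, surface curvature, aperture width).
    rng = np.random.default_rng(0)
    demonstrated = rng.normal([1.0, 0.2, 0.05], 0.05, size=(20, 3))  # good grasps
    rejected = rng.normal([0.3, 0.8, 0.20], 0.10, size=(20, 3))      # bad grasps

    good = fit_gaussian(demonstrated)
    bad = fit_gaussian(rejected)
    candidates = rng.normal([0.9, 0.3, 0.07], 0.2, size=(5, 3))
    order, scores = rank_candidates(candidates, good, bad)
    print("best candidate index:", order[0])

The prior term in the log-odds shows one simple way prior knowledge can be folded into the inference, in the spirit of the abstract's point about the benefits of Bayesian methods; the dissertation's actual grasp model is more elaborate than this toy classifier.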