Generalization in adaptation to stable and unstable dynamics.

Humans skillfully manipulate objects and tools despite the inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how to learn feedforward muscle activity as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model for human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization.

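The abstract outlines a trial-by-trial rule that adjusts feedforward muscle commands and error-driven co-activation while penalizing effort. The following is a minimal illustrative sketch of that kind of rule, not the authors' implementation: it assumes a one-dimensional point-mass reach, Gaussian basis functions over movement time as stand-in motor primitives, a simple viscous force field, and made-up gain values (alpha, beta, gamma, k0, d0).

```python
"""
Minimal illustrative sketch (not the authors' code) of the kind of
trial-by-trial force/impedance learning rule summarized in the abstract:
feedforward commands are encoded with motor primitives (here, Gaussian
basis functions of movement time standing in for the state) and updated
by a gradient-descent-like rule that reduces motion error and effort,
while co-activation grows with the magnitude of the error.
The 1-D point-mass plant, the viscous force field and all gains below
are assumptions chosen for illustration, not values from the paper.
"""
import numpy as np

# --- 1-D point-mass "hand" tracking a minimum-jerk reference reach ---
dt, T, mass = 0.01, 1.0, 1.0
t = np.arange(0.0, T, dt)
s = t / T
x_ref = 0.2 * (10 * s**3 - 15 * s**4 + 6 * s**5)   # 20 cm reach
v_ref = np.gradient(x_ref, dt)

# --- motor primitives: Gaussian basis functions over normalized time ---
n_basis = 15
centers = np.linspace(0.0, 1.0, n_basis)
width = 0.05

def basis(si):
    phi = np.exp(-(si - centers) ** 2 / (2.0 * width**2))
    return phi / phi.sum()

w_ff = np.zeros(n_basis)   # learned feedforward force
w_co = np.zeros(n_basis)   # learned co-activation (extra stiffness)

# illustrative gains: error correction, co-activation, effort decay
alpha, beta, gamma = 200.0, 100.0, 0.02
k0, d0 = 50.0, 10.0        # baseline feedback stiffness and damping

def run_trial(b_field, learn=True):
    """One reach in a viscous field f = -b_field * v; returns peak |error|."""
    x, v = 0.0, 0.0
    dw_ff = np.zeros(n_basis)
    dw_co = np.zeros(n_basis)
    peak_err = 0.0
    for i, si in enumerate(s):
        phi = basis(si)
        e = x_ref[i] - x                               # motion error
        k = k0 + w_co @ phi                            # stiffness incl. co-activation
        u = w_ff @ phi + k * e + d0 * (v_ref[i] - v)   # feedforward + feedback command
        a = (u - b_field * v) / mass                   # field opposes the motion
        v += a * dt
        x += v * dt
        peak_err = max(peak_err, abs(e))
        if learn:                                      # accumulate update terms
            dw_ff += phi * e * dt
            dw_co += phi * abs(e) * dt
    if learn:
        # V-shaped, trial-by-trial update: signed error drives force,
        # error magnitude drives co-activation, a decay term limits effort
        w_ff += alpha * dw_ff - gamma * w_ff
        w_co += beta * dw_co - gamma * w_co
        np.clip(w_co, 0.0, None, out=w_co)             # co-activation is non-negative
    return peak_err

for trial in range(40):
    err = run_trial(b_field=15.0)
    if trial % 10 == 0:
        print(f"trial {trial:2d}: peak error = {100 * err:.1f} cm")
```

With these illustrative gains the printed peak error shrinks over trials as the feedforward weights come to cancel the field, while the decay (effort) term keeps co-activation from growing once the error is small. The model in the paper additionally maintains a stability margin, which is what allows adaptation to unstable (divergent) dynamics as well as to stable force fields.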

Bibliographic Details
Main Authors: Abdelhamid Kadiallah, David W Franklin, Etienne Burdet
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2012-01-01
Series: PLoS ONE
Online Access: http://europepmc.org/articles/PMC3466288?pdf=render
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0045075