Encoding Multiple Sensor Data for Robotic Learning Skills From Multimodal Demonstration
Learning a task such as pushing an object, where the constraints of both position and force have to be satisfied, is usually difficult for a collaborative robot. In this work, we propose a multimodal teaching-by-demonstration system that can enable the robot to perform this kind of task. The basic...
Main Authors: Chao Zeng, Chenguang Yang, Junpei Zhong, Jianwei Zhang
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8856213/
Similar Items
- Advancement of Robots With Double Encoders for Industrial and Collaborative Applications
  by: Stanislav Mikhel, et al.
  Published: (2018-11-01)
- Planning and Sequencing Through Multimodal Interaction for Robot Programming
  by: Akan, Batu
  Published: (2014)
- Predictive Methodology for Dimensional Path Precision in Robotic Machining Operations
  by: I. Iglesias, et al.
  Published: (2018-01-01)
- An Incremental Learning Framework to Enhance Teaching by Demonstration Based on Multimodal Sensor Fusion
  by: Jie Li, et al.
  Published: (2020-08-01)
- A Flexible Multimodal Sole Sensor for Legged Robot Sensing Complex Ground Information during Locomotion
  by: Yingtian Xu, et al.
  Published: (2021-08-01)