Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System
Three-dimensional reconstruction and semantic understanding have attracted extensive attention in recent years. However, current reconstruction techniques mainly target large-scale scenes, such as indoor environments or autonomous driving. There are few studies on the small-scale, high-precision scene reconstruction needed for manipulator operation, which plays an essential role in decision-making and intelligent control. In this paper, a group of images captured by an eye-in-hand vision system mounted on a robotic manipulator is segmented using deep learning and geometric features, and a semantic 3D reconstruction is created with a map-stitching method. The results demonstrate that our method effectively improves both the quality of the segmented images and the precision of the semantic 3D reconstruction.
Main Authors: | Fusheng Zha, Yu Fu, Pengfei Wang, Wei Guo, Mantian Li, Xin Wang, Hegao Cai |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-02-01 |
Series: | Applied Sciences |
Subjects: | semantic 3D reconstruction; eye-in-hand vision system; robotic manipulator |
Online Access: | https://www.mdpi.com/2076-3417/10/3/1183 |
id |
doaj-cdb7ad627fce4399886219d6ef3f6a0c |
---|---|
record_format |
Article |
spelling |
Record ID: doaj-cdb7ad627fce4399886219d6ef3f6a0c (updated 2020-11-25T01:40:00Z)
Language: English
Publisher: MDPI AG
Series: Applied Sciences (ISSN 2076-3417)
Published: 2020-02-01, Vol. 10, Iss. 3, Art. 1183
DOI: 10.3390/app10031183 (article ID app10031183)
Title: Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System
Authors: Fusheng Zha, Yu Fu, Pengfei Wang, Wei Guo, Mantian Li, Hegao Cai (State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150080, China); Xin Wang (Shenzhen Academy of Aerospace Technology, Shenzhen 518057, China)
Abstract: Three-dimensional reconstruction and semantic understanding have attracted extensive attention in recent years. However, current reconstruction techniques mainly target large-scale scenes, such as indoor environments or autonomous driving. There are few studies on the small-scale, high-precision scene reconstruction needed for manipulator operation, which plays an essential role in decision-making and intelligent control. In this paper, a group of images captured by an eye-in-hand vision system mounted on a robotic manipulator is segmented using deep learning and geometric features, and a semantic 3D reconstruction is created with a map-stitching method. The results demonstrate that our method effectively improves both the quality of the segmented images and the precision of the semantic 3D reconstruction.
Online Access: https://www.mdpi.com/2076-3417/10/3/1183
Keywords: semantic 3D reconstruction; eye-in-hand vision system; robotic manipulator |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Fusheng Zha, Yu Fu, Pengfei Wang, Wei Guo, Mantian Li, Xin Wang, Hegao Cai |
spellingShingle |
Fusheng Zha, Yu Fu, Pengfei Wang, Wei Guo, Mantian Li, Xin Wang, Hegao Cai; Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System; Applied Sciences; semantic 3D reconstruction; eye-in-hand vision system; robotic manipulator |
author_facet |
Fusheng Zha, Yu Fu, Pengfei Wang, Wei Guo, Mantian Li, Xin Wang, Hegao Cai |
author_sort |
Fusheng Zha |
title |
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System |
title_short |
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System |
title_full |
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System |
title_fullStr |
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System |
title_full_unstemmed |
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System |
title_sort |
semantic 3d reconstruction for robotic manipulators with an eye-in-hand vision system |
publisher |
MDPI AG |
series |
Applied Sciences |
issn |
2076-3417 |
publishDate |
2020-02-01 |
description |
Three-dimensional reconstruction and semantic understanding have attracted extensive attention in recent years. However, current reconstruction techniques mainly target large-scale scenes, such as indoor environments or autonomous driving. There are few studies on the small-scale, high-precision scene reconstruction needed for manipulator operation, which plays an essential role in decision-making and intelligent control. In this paper, a group of images captured by an eye-in-hand vision system mounted on a robotic manipulator is segmented using deep learning and geometric features, and a semantic 3D reconstruction is created with a map-stitching method. The results demonstrate that our method effectively improves both the quality of the segmented images and the precision of the semantic 3D reconstruction. |
topic |
semantic 3D reconstruction; eye-in-hand vision system; robotic manipulator |
url |
https://www.mdpi.com/2076-3417/10/3/1183 |
work_keys_str_mv |
AT fushengzha semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT yufu semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT pengfeiwang semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT weiguo semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT mantianli semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT xinwang semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem AT hegaocai semantic3dreconstructionforroboticmanipulatorswithaneyeinhandvisionsystem |
_version_ |
1725047677033381888 |
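The abstract describes stitching per-view semantic maps from an eye-in-hand camera into one reconstruction. The paper itself does not specify the stitching algorithm here, so the following is only a minimal illustrative sketch of the general idea: each view's labeled point cloud is transformed into a common base frame using the known camera-to-base pose (available from the manipulator's forward kinematics in an eye-in-hand setup), then the clouds are concatenated. The function name, data layout, and toy data are all assumptions, not from the paper.

```python
import numpy as np

def stitch_semantic_maps(views):
    """Merge per-view (points, labels, pose) triples into one labeled cloud.

    points : (N, 3) array of 3D points in the camera frame
    labels : (N,) semantic label per point
    pose   : (4, 4) homogeneous camera-to-base transform
    """
    all_pts, all_lbls = [], []
    for points, labels, pose in views:
        # Lift points to homogeneous coordinates: (N, 3) -> (N, 4)
        homo = np.hstack([points, np.ones((len(points), 1))])
        # Transform into the common base frame and drop the homogeneous column
        base = (pose @ homo.T).T[:, :3]
        all_pts.append(base)
        all_lbls.append(labels)
    return np.vstack(all_pts), np.concatenate(all_lbls)

# Toy example: the same world point observed from two camera poses.
p1 = np.array([[0.0, 0.0, 1.0]])      # point 1 m in front of camera 1
pose1 = np.eye(4)                     # camera 1 coincides with the base frame
pose2 = np.eye(4)
pose2[0, 3] = 0.5                     # camera 2 shifted 0.5 m along base x
p2 = np.array([[-0.5, 0.0, 1.0]])     # same point, seen from camera 2

cloud, labels = stitch_semantic_maps([(p1, np.array([3]), pose1),
                                      (p2, np.array([3]), pose2)])
print(cloud)   # both rows land on the same base-frame point [0, 0, 1]
```

In a real pipeline the poses would come from hand-eye calibration plus the arm's joint encoders, and overlapping points would be fused (e.g. by voxel filtering) rather than simply concatenated; this sketch only shows the coordinate-frame bookkeeping.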