Target Object Identification and Location Based on Multi-sensor Fusion

<span style="font-family: &quot;Times New Roman&quot;,&quot;serif&quot;; font-size: 10pt; mso-fareast-font-family: DFKai-SB; mso-ansi-language: EN-US; mso-fareast-language: IT; mso-bidi-language: AR-SA;" lang="EN-US">For an unknown environment, how to make a...

Full description

Bibliographic Details
Main Authors: Yong Jiang, Hong-Guang Wang, Ning Xi
Format: Article
Language: English
Published: Chinese Institute of Automation Engineers (CIAE) & Taiwan Smart Living Space Association (SMART LISA), 2013-03-01
Series: International Journal of Automation and Smart Technology
Subjects: multi-sensor fusion; mobile manipulations; object identification and location; camera and laser range finder
Online Access: http://www.ausmt.org/index.php/AUSMT/article/view/171
id doaj-e5e9f62b2ecf4124bcfce897d830e048
record_format Article
collection DOAJ
language English
format Article
sources DOAJ
author Yong Jiang
Hong-Guang Wang
Ning Xi
spellingShingle Yong Jiang
Hong-Guang Wang
Ning Xi
Target Object Identification and Location Based on Multi-sensor Fusion
International Journal of Automation and Smart Technology
multi-sensor fusion
mobile manipulations
object identification and location
camera and laser range finder
author_facet Yong Jiang
Hong-Guang Wang
Ning Xi
author_sort Yong Jiang
title Target Object Identification and Location Based on Multi-sensor Fusion
title_short Target Object Identification and Location Based on Multi-sensor Fusion
title_full Target Object Identification and Location Based on Multi-sensor Fusion
title_fullStr Target Object Identification and Location Based on Multi-sensor Fusion
title_full_unstemmed Target Object Identification and Location Based on Multi-sensor Fusion
title_sort target object identification and location based on multi-sensor fusion
publisher Chinese Institute of Automation Engineers (CIAE) & Taiwan Smart Living Space Association (SMART LISA)
series International Journal of Automation and Smart Technology
issn 2223-9766
publishDate 2013-03-01
description Enabling a mobile robot to autonomously identify and locate a target object in an unknown environment is a very challenging problem. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) is proposed for mobile manipulations. Although a camera can acquire large quantities of information, it does not directly provide 3D data about the environment; moreover, camera image processing is complex and easily affected by changes in ambient light. Given the LRF's ability to directly measure the 3D coordinates of the environment and its robustness to outside influences, and the camera's strength in acquiring rich color information, the two sensors are combined to exploit their complementary advantages, obtaining more accurate measurements while simplifying information processing. A homogeneous transformation model of the system is built to overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information. Then, by combining the color features from the camera image with the shape features from the LRF measurement data, autonomous identification and location of the target object are achieved. To extract the shape features of the object, a two-step method is introduced, and a sliced point cloud algorithm is proposed for the preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experimental testing and analysis carried out on a mobile manipulator platform. The results show that with this method the robot can not only identify the target object autonomously, but also determine whether it can be manipulated and acquire a proper grasping location.
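As a concrete illustration of the overlay step described in the abstract, the following is a minimal sketch of projecting LRF points into a camera image through a homogeneous transformation and a pinhole model. The calibration matrices T_cam_lrf and K below are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: overlay LRF points onto a camera image via a homogeneous
# transformation. T_cam_lrf and K are assumed example values, not the
# paper's calibration results.
import numpy as np

def project_lrf_points(points_lrf: np.ndarray,
                       T_cam_lrf: np.ndarray,
                       K: np.ndarray) -> np.ndarray:
    """Map Nx3 LRF points into pixel coordinates (u, v, depth)."""
    # Lift to homogeneous coordinates: (x, y, z) -> (x, y, z, 1).
    n = points_lrf.shape[0]
    pts_h = np.hstack([points_lrf, np.ones((n, 1))])
    # Rigid transform from the LRF frame into the camera frame.
    pts_cam = (T_cam_lrf @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (K @ pts_cam.T).T
    uv[:, :2] /= uv[:, 2:3]
    return np.hstack([uv[:, :2], pts_cam[:, 2:3]])  # columns: u, v, depth

# Illustrative calibration values (assumed).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T_cam_lrf = np.eye(4)  # identity: LRF and camera frames coincide here
scan = np.array([[0.5, 0.1, 2.0], [-0.3, 0.0, 1.5]])
print(project_lrf_points(scan, T_cam_lrf, K))
```

Once every LRF point carries a pixel location, the depth-registered 3D image the abstract mentions follows by writing each point's depth into the corresponding pixel.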
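The identification step combines color features from the image with shape features from the depth-registered points. The sketch below shows one plausible form of such a fusion, with an HSV color gate and a weighted score; the color range and weight are assumptions for illustration, not the paper's actual criteria.

```python
# Minimal sketch: fuse a color-match cue with a shape-match cue. The HSV
# range and the 0.5 weight are illustrative assumptions only.
import numpy as np

def color_mask(hsv_image: np.ndarray, lo, hi) -> np.ndarray:
    """Boolean mask of pixels whose HSV values fall inside [lo, hi]."""
    return np.all((hsv_image >= lo) & (hsv_image <= hi), axis=-1)

def fused_score(color_ratio: float, shape_similarity: float,
                w_color: float = 0.5) -> float:
    """Weighted blend of a color-match ratio and a shape-match score."""
    return w_color * color_ratio + (1.0 - w_color) * shape_similarity

# Toy image; in practice this would be the camera frame converted to HSV.
hsv = np.random.randint(0, 256, (480, 640, 3))
mask = color_mask(hsv, lo=(0, 100, 100), hi=(20, 255, 255))
score = fused_score(mask.mean(), shape_similarity=0.8)
print(round(score, 3))
```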
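For the preliminary classification of the LRF data, the paper proposes a sliced point cloud algorithm. The following sketch captures only the general idea of slicing a cloud by height and summarizing each slice's cross-section; the slice thickness and the uniformity test are illustrative assumptions, not the paper's two-step method.

```python
# Minimal sketch of the "sliced point cloud" idea: partition a cloud into
# horizontal slices and use per-slice extent as a crude shape cue. The
# thickness and the cylinder-like uniformity test are assumed values.
import numpy as np

def slice_cloud(points: np.ndarray, thickness: float = 0.02):
    """Group Nx3 points (x, y, z) into slices of the given z-thickness."""
    z = points[:, 2]
    idx = np.floor((z - z.min()) / thickness).astype(int)
    return [points[idx == i] for i in range(idx.max() + 1) if np.any(idx == i)]

def slice_widths(slices):
    """Per-slice horizontal extent, a simple shape descriptor."""
    widths = []
    for s in slices:
        span = s[:, :2].max(axis=0) - s[:, :2].min(axis=0)
        widths.append(float(np.linalg.norm(span)))
    return widths

# Near-constant widths across slices suggest an upright cylinder or box
# (e.g. a graspable bottle); strongly varying widths suggest otherwise.
cloud = np.random.rand(500, 3) * [0.05, 0.05, 0.2]  # synthetic toy cloud
widths = slice_widths(slice_cloud(cloud))
is_uniform = np.std(widths) < 0.1 * np.mean(widths)
print(len(widths), is_uniform)
```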
topic multi-sensor fusion
mobile manipulations
object identification and location
camera and laser range finder
url http://www.ausmt.org/index.php/AUSMT/article/view/171
work_keys_str_mv AT yongjiang targetobjectidentificationandlocationbasedonmultisensorfusion
AT hongguangwang targetobjectidentificationandlocationbasedonmultisensorfusion
AT ningxi targetobjectidentificationandlocationbasedonmultisensorfusion
_version_ 1725956938499358720
spelling doaj-e5e9f62b2ecf4124bcfce897d830e048 2020-11-24T21:32:34Z
language English
publisher Chinese Institute of Automation Engineers (CIAE) & Taiwan Smart Living Space Association (SMART LISA)
journal International Journal of Automation and Smart Technology, ISSN 2223-9766
published 2013-03-01, Vol. 3, No. 1, pp. 57-65
doi 10.5875/ausmt.v3i1.171
title Target Object Identification and Location Based on Multi-sensor Fusion
authors Yong Jiang (Shenyang Institute of Automation, Chinese Academy of Sciences); Hong-Guang Wang (Shenyang Institute of Automation, Chinese Academy of Sciences); Ning Xi (Department of Electrical and Computer Engineering, Michigan State University, USA)
url http://www.ausmt.org/index.php/AUSMT/article/view/171
topics multi-sensor fusion; mobile manipulations; object identification and location; camera and laser range finder