The Application of Deep Learning in RGB-D Images for the Control of Robot Arm

Master's === Ming Chuan University === Master's Program in Computer and Communication Engineering === 105


Bibliographic Details
Main Authors: DENG, JU-CHIEH, 鄧茹潔
Other Authors: CHIANG, SHU-YIN
Format: Others
Language: zh-TW
Published: 2017
Online Access:http://ndltd.ncl.edu.tw/handle/36348540372513931289
id ndltd-TW-105MCU00650010
record_format oai_dc
spelling ndltd-TW-105MCU00650010 2017-09-18T05:07:59Z http://ndltd.ncl.edu.tw/handle/36348540372513931289 The Application of Deep Learning in RGB-D Images for the Control of Robot Arm 應用深度學習於RGB-D影像進行機械手臂之控制 DENG, JU-CHIEH 鄧茹潔 Master's Ming Chuan University Master's Program in Computer and Communication Engineering 105 Robot research is one of the key topics in the development of science and technology. With advances in robotics and artificial intelligence, the work performed by robots is no longer limited to simple, repetitive motions; robots are expected to show a degree of independent judgment, which broadens their applications and improves their practicality, and robot vision has become one of the most critical enabling technologies. At the Google I/O conference, Google presented its worldwide Impact Challenge and Google.org projects, which bring technology and new teams together to make the world better, including work for people with limb impairments, hearing impairments, and Parkinson's disease. The aim of this study is therefore to assist people with upper limb disabilities, for example from sports injuries, joint degeneration in the elderly, or spinal muscular atrophy, in grasping distant objects, using a robot arm to make daily life more convenient. In this study, an RA605 articulated robot arm is combined with visual images and deep learning, and the components are integrated into a system that achieves precise positioning of the arm, target recognition, motion control, and grasping of the target object. The vision system uses a Kinect v2 camera and a Logitech C525 camera. The Kinect v2 captures the environment image, and a deep learning algorithm recognizes the target object and obtains its coordinate position. The Logitech C525 camera is mounted on the sixth joint of the robot and rotates with that joint; it is used to confirm the position computed from the Kinect v2 image, to capture a close-range image, to calculate the gripping position of the target object, and to control the electric gripper so that the target object is successfully grasped. In this way the system achieves the goal of assisting people with upper limb disabilities in grasping distant objects. CHIANG, SHU-YIN 江叔盈 2017 Degree thesis ; thesis 59 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === Ming Chuan University === Master's Program in Computer and Communication Engineering === 105 === Robot research is one of the key topics in the development of science and technology. With advances in robotics and artificial intelligence, the work performed by robots is no longer limited to simple, repetitive motions; robots are expected to show a degree of independent judgment, which broadens their applications and improves their practicality, and robot vision has become one of the most critical enabling technologies. At the Google I/O conference, Google presented its worldwide Impact Challenge and Google.org projects, which bring technology and new teams together to make the world better, including work for people with limb impairments, hearing impairments, and Parkinson's disease. The aim of this study is therefore to assist people with upper limb disabilities, for example from sports injuries, joint degeneration in the elderly, or spinal muscular atrophy, in grasping distant objects, using a robot arm to make daily life more convenient. In this study, an RA605 articulated robot arm is combined with visual images and deep learning, and the components are integrated into a system that achieves precise positioning of the arm, target recognition, motion control, and grasping of the target object. The vision system uses a Kinect v2 camera and a Logitech C525 camera. The Kinect v2 captures the environment image, and a deep learning algorithm recognizes the target object and obtains its coordinate position. The Logitech C525 camera is mounted on the sixth joint of the robot and rotates with that joint; it is used to confirm the position computed from the Kinect v2 image, to capture a close-range image, to calculate the gripping position of the target object, and to control the electric gripper so that the target object is successfully grasped. In this way the system achieves the goal of assisting people with upper limb disabilities in grasping distant objects. (A minimal coordinate-mapping sketch follows this record.)
author2 CHIANG, SHU-YIN
author_facet CHIANG, SHU-YIN
DENG, JU-CHIEH
鄧茹潔
author DENG, JU-CHIEH
鄧茹潔
spellingShingle DENG, JU-CHIEH
鄧茹潔
The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
author_sort DENG, JU-CHIEH
title The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
title_short The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
title_full The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
title_fullStr The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
title_full_unstemmed The Application of Deep Learning in RGB-D Images for the Control of Robot Arm
title_sort application of deep learning in rgb-d images for the control of robot arm
publishDate 2017
url http://ndltd.ncl.edu.tw/handle/36348540372513931289
work_keys_str_mv AT dengjuchieh theapplicationofdeeplearninginrgbdimagesforthecontrolofrobotarm
AT dèngrújié theapplicationofdeeplearninginrgbdimagesforthecontrolofrobotarm
AT dengjuchieh yīngyòngshēndùxuéxíyúrgbdyǐngxiàngjìnxíngjīxièshǒubìzhīkòngzhì
AT dèngrújié yīngyòngshēndùxuéxíyúrgbdyǐngxiàngjìnxíngjīxièshǒubìzhīkòngzhì
AT dengjuchieh applicationofdeeplearninginrgbdimagesforthecontrolofrobotarm
AT dèngrújié applicationofdeeplearninginrgbdimagesforthecontrolofrobotarm
_version_ 1718538152213217280
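
The abstract describes a pipeline in which the Kinect v2 supplies an RGB-D image of the environment, a deep-learning detector locates the target object, and the detected position is converted into a grasp target for the RA605 arm. The Python sketch below is purely illustrative and is not taken from the thesis: the camera intrinsics, the camera-to-base transform, and the function names (pixel_to_camera, camera_to_base) are all assumptions, included only to make the pixel-plus-depth-to-robot-coordinate step concrete.

import numpy as np

# Illustrative sketch only -- the intrinsics and extrinsics below are
# placeholder values, not numbers taken from the thesis.

# Assumed pinhole intrinsics for the Kinect v2 color stream (fx, fy, cx, cy in pixels).
FX, FY = 1060.0, 1060.0
CX, CY = 960.0, 540.0

# Assumed rigid transform from the camera frame to the robot-arm base frame:
# a 3x3 rotation matrix and a translation vector in meters.
R_CAM_TO_BASE = np.eye(3)
T_CAM_TO_BASE = np.array([0.40, 0.00, 0.80])

def pixel_to_camera(u, v, depth_m):
    """Back-project a pixel (u, v) with its depth reading (meters) into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def camera_to_base(p_cam):
    """Transform a camera-frame point into the robot base frame."""
    return R_CAM_TO_BASE @ p_cam + T_CAM_TO_BASE

if __name__ == "__main__":
    # Suppose the deep-learning detector reported the target object at pixel
    # (u, v) with a depth reading of 1.2 m from the Kinect v2.
    u, v, depth = 1024, 600, 1.2
    p_cam = pixel_to_camera(u, v, depth)
    p_base = camera_to_base(p_cam)
    print("camera-frame position (m):", p_cam)
    print("base-frame grasp target (m):", p_base)

In the actual system the camera-to-base transform would presumably come from calibrating the Kinect v2, the arm-mounted Logitech C525, and the RA605 base against each other, and the final gripping position would also account for the geometry of the electric gripper; those details are not given in the abstract and are left out of the sketch.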