
Bibliographic Details
Main Authors: Hun-Yen Chang, 張華延
Other Authors: Wen-June Wang
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/h9bzew
id ndltd-TW-107NCU05442045
record_format oai_dc
spelling ndltd-TW-107NCU05442045 2019-10-22T05:28:12Z http://ndltd.ncl.edu.tw/handle/h9bzew none 基於深度學習與影像處理技術之單眼視覺六軸機械手臂控制 Hun-Yen Chang 張華延 碩士 國立中央大學 電機工程學系 107 Wen-June Wang 王文俊 2019 學位論文 ; thesis 86 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === National Central University === Department of Electrical Engineering === 107 (ROC academic year) === The main purpose of this thesis is to control a six-degrees-of-freedom (6-DOF) robot arm to achieve a pick-and-place application for five different objects. The relative position between each object and the robot is calculated by using machine vision to detect and identify the objects; the objects are placed at random within the mechanical reach of the robot and within the camera's field of view. Using the information from vision, the robot can successfully pick and place the objects. In this study, the Robot Operating System (ROS) is used to develop the software system in a Linux environment. The NVIDIA Jetson TX2, the robot, the industrial camera, and the gripper are integrated through ROS's distributed, peer-to-peer architecture, and all collected information and data can be exchanged among them; this collaborative design realizes the integration of software and hardware. The central contribution is the use of machine vision to detect and identify the target objects and to calculate the relative position between each object and the robot arm. Three steps are carried out with the monocular vision of the industrial camera mounted on the end of the robot. First, deep learning is used to detect and identify the objects. Second, the bounding box produced by the deep learning model is refined using image processing techniques. Third, the relative position between each object and the camera is calculated with the pinhole camera model. For the robot application, the following tasks are completed. First, the relative position and orientation between two different frames are calculated using forward kinematics. Second, a target point is set, and the joint angles that bring the robot's tool center point to that point are calculated through inverse kinematics. Third, a virtual environment is built to prevent collisions during robot movement. Fourth, joint-angle constraints are set to avoid large shifts at the end of the robot. Fifth, path constraints are used to prevent collisions between the robot and the target object. Sixth, a series of intermediate points between the initial point and the target point is found by trajectory planning. With these tasks completed, the robot can perform pick-and-place operations on randomly placed objects through inverse kinematics, within the limits of the vision range and the constraints of the mechanism.
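
The description says the object-to-camera position is obtained from the pinhole camera model. A minimal sketch of that back-projection step is below, assuming the intrinsics (focal lengths fx, fy and principal point cx, cy) come from a prior camera calibration and that the depth Z is known, for example from a fixed working plane or the known object size; a single monocular view cannot recover Z from one pixel alone, so some such assumption is needed. The function name and the example numbers are illustrative, not taken from the thesis.

```python
import numpy as np

def pixel_to_camera_frame(u, v, depth_z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at known depth Z using the pinhole model.

    Returns the object position in the camera frame, in the same
    units as depth_z (e.g. metres).
    """
    x = (u - cx) * depth_z / fx
    y = (v - cy) * depth_z / fy
    return np.array([x, y, depth_z])

# Example: object centre detected at pixel (640, 360), 0.30 m from the
# camera, with hypothetical calibration values for a 1920x1080 sensor.
print(pixel_to_camera_frame(640, 360, 0.30, fx=1200.0, fy=1200.0,
                            cx=960.0, cy=540.0))
```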
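The description also mentions computing the relative position and orientation between two frames by forward kinematics. For a 6-DOF arm this is commonly done by chaining homogeneous transforms built from Denavit-Hartenberg (DH) parameters; the abstract does not state which convention the thesis uses, so the standard-DH sketch below is an assumption, and the placeholder DH table would have to be replaced with values from the actual robot's datasheet.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link under the standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Base-to-tool pose as the product of per-joint DH transforms."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4: rotation in T[:3, :3], position in T[:3, 3]

# Hypothetical 6-joint DH table of (d, a, alpha) -- placeholder values only.
DH_TABLE = [(0.34, 0.0, np.pi / 2), (0.0, 0.26, 0.0),
            (0.0, 0.02, np.pi / 2), (0.29, 0.0, -np.pi / 2),
            (0.0, 0.0, np.pi / 2), (0.07, 0.0, 0.0)]
print(forward_kinematics([0.0] * 6, DH_TABLE)[:3, 3])  # tool position
```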
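Finally, the description notes that trajectory planning yields a series of intermediate points between the initial and target points. The abstract does not name the planner (ROS setups often delegate this to a motion-planning framework), so purely as an illustration of the waypoint idea, the sketch below interpolates intermediate configurations in joint space; a real planner would additionally enforce the collision, path, and joint-angle constraints listed above.

```python
import numpy as np

def joint_space_waypoints(q_start, q_goal, n_points=20):
    """Linearly interpolated joint configurations from start to goal."""
    q_start = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    return [q_start + s * (q_goal - q_start)
            for s in np.linspace(0.0, 1.0, n_points)]

# Example: 5 waypoints between two hypothetical 6-joint configurations.
for q in joint_space_waypoints([0.0] * 6, [0.5, -0.3, 0.8, 0.0, 0.4, 0.1],
                               n_points=5):
    print(np.round(q, 3))
```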
author2 Wen-June Wang
author_facet Wen-June Wang
Hun-Yen Chang
張華延
author Hun-Yen Chang
張華延
spellingShingle Hun-Yen Chang
張華延
none
author_sort Hun-Yen Chang
title none
title_short none
title_full none
title_fullStr none
title_full_unstemmed none
title_sort none
publishDate 2019
url http://ndltd.ncl.edu.tw/handle/h9bzew
work_keys_str_mv AT hunyenchang none
AT zhānghuáyán none
AT hunyenchang jīyúshēndùxuéxíyǔyǐngxiàngchùlǐjìshùzhīdānyǎnshìjuéliùzhóujīxièshǒubìkòngzhì
AT zhānghuáyán jīyúshēndùxuéxíyǔyǐngxiàngchùlǐjìshùzhīdānyǎnshìjuéliùzhóujīxièshǒubìkòngzhì
_version_ 1719273947081474048