
Bibliographic Details
Main Authors: Hun-Yen Chang, 張華延
Other Authors: Wen-June Wang
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/h9bzew
Description
Summary: Master's === National Central University === Department of Electrical Engineering === 107 === The main purpose of this thesis is to control a six-degrees-of-freedom (6-DOF) robot to achieve a pick-and-place application for five different objects. The relative position between each object and the robot is computed by using machine vision to detect and identify the objects; the objects are placed at random positions within the robot's mechanical limits and within the camera's field of view. With this visual information, the robot can successfully pick and place each object.

The Robot Operating System (ROS) is used to develop the software system under a Linux environment. The NVIDIA Jetson TX2, the robot, the industrial camera, and the gripper are integrated through ROS's distributed, peer-to-peer architecture, so that all collected information and data can be exchanged among them. This collaborative design realizes the integration of software and hardware.

The core of the system is the use of machine vision to detect and identify the target objects and to calculate the relative position between each object and the robot arm. Three steps are completed using monocular vision from the industrial camera mounted on the end of the robot. First, deep learning is used to detect and identify the objects. Second, the bounding boxes produced by the deep learning detector are refined with image processing. Third, the relative position between each object and the camera is calculated with the pinhole camera model.

With regard to the robot application, the following tasks are completed. First, the relative position and orientation between two different frames are calculated using forward kinematics. Second, a target point is set, and the joint angles that bring the robot's tool center to that point are calculated through inverse kinematics. Third, a virtual environment is set up to prevent collisions during robot movement. Fourth, joint-angle constraints are imposed to avoid large shifts at the end of the robot. Fifth, path constraints are used to prevent collisions between the robot and the target object. Sixth, a series of intermediate points between the initial point and the target point is found by trajectory planning. With these tasks completed, the robot can perform pick-and-place on randomly placed objects through inverse kinematics, within the limits of the vision range and the constraints of the mechanism.
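The abstract does not give implementation details, but the ROS distributed, peer-to-peer design it describes can be illustrated with a minimal sketch: a vision node publishes the object pose it has computed on a topic, and the robot-control node subscribes to that topic. The node name, topic name, frame id, and pose values below are illustrative assumptions, not taken from the thesis.

```python
#!/usr/bin/env python
# Minimal sketch of the distributed design: the vision node publishes
# the detected object's pose; the robot node subscribes elsewhere.
# Names and values are placeholders, not from the thesis.
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node('vision_node')  # hypothetical node name
    pub = rospy.Publisher('/detected_object_pose', PoseStamped, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'camera_link'
        msg.pose.position.x = 0.05   # placeholder values; in practice these
        msg.pose.position.y = -0.02  # come from the detection pipeline
        msg.pose.position.z = 0.50
        msg.pose.orientation.w = 1.0
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```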
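The third vision step back-projects a detected pixel to a 3-D point in the camera frame with the pinhole camera model. A minimal sketch follows, assuming the camera intrinsics (fx, fy, cx, cy) are known from calibration and the depth Z of the object is known (with a monocular camera, it would typically come from a known working-plane distance); all numeric values are placeholders.

```python
import numpy as np

# Hypothetical intrinsics; real values come from camera calibration.
fx, fy = 1250.0, 1250.0   # focal lengths in pixels (assumed)
cx, cy = 640.0, 360.0     # principal point in pixels (assumed)

def pixel_to_camera(u, v, Z):
    """Back-project pixel (u, v) at known depth Z (metres) to a 3-D
    point in the camera frame, inverting the pinhole projection:
        u = fx * X / Z + cx,   v = fy * Y / Z + cy
    """
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# Example: centre of a detected bounding box, object 0.5 m away.
print(pixel_to_camera(700.0, 400.0, 0.5))
```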
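Forward kinematics, as used in the first robot task, composes one homogeneous transform per joint to obtain the pose of the tool frame relative to the base frame. The sketch below uses the standard Denavit-Hartenberg convention; the DH parameters are placeholder values for a generic 6-DOF arm, not the parameters of the robot used in the thesis.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links under the
    standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Illustrative DH table (d, a, alpha) for a generic 6-DOF arm;
# these are placeholder values, not the thesis robot's.
dh_params = [
    (0.1625,  0.0,     np.pi / 2),
    (0.0,    -0.425,   0.0),
    (0.0,    -0.3922,  0.0),
    (0.1333,  0.0,     np.pi / 2),
    (0.0997,  0.0,    -np.pi / 2),
    (0.0996,  0.0,     0.0),
]

def forward_kinematics(joint_angles):
    """Compose the per-link transforms; the result maps the tool
    frame into the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

print(forward_kinematics([0.0] * 6))
```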
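The remaining robot tasks (inverse kinematics to a target point, a virtual collision environment, path constraints, and trajectory planning) map closely onto MoveIt, the common motion-planning framework in ROS. The thesis does not name MoveIt, so the sketch below is only one plausible realization; the group name 'manipulator', the table dimensions, the constraint tolerances, and the target pose are all assumptions.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped
from moveit_msgs.msg import Constraints, OrientationConstraint

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('pick_and_place_demo')  # hypothetical node name
group = moveit_commander.MoveGroupCommander('manipulator')  # assumed group name
scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)  # allow the planning scene interface to connect

# Virtual environment: add the work table as a collision object so the
# planner avoids it during robot movement.
table = PoseStamped()
table.header.frame_id = group.get_planning_frame()
table.pose.position.z = -0.05
table.pose.orientation.w = 1.0
scene.add_box('table', table, size=(1.0, 1.0, 0.1))

# Path constraint: keep the end effector near a fixed orientation along
# the whole path (here, gripper pointing down: roll = pi, a placeholder).
oc = OrientationConstraint()
oc.header.frame_id = group.get_planning_frame()
oc.link_name = group.get_end_effector_link()
oc.orientation.x = 1.0  # quaternion for roll = pi
oc.absolute_x_axis_tolerance = 0.1
oc.absolute_y_axis_tolerance = 0.1
oc.absolute_z_axis_tolerance = 3.14  # rotation about the tool axis is free
oc.weight = 1.0
constraints = Constraints()  # JointConstraint entries could be added likewise
constraints.orientation_constraints.append(oc)
group.set_path_constraints(constraints)

# Target pose for the tool centre point (x, y, z, roll, pitch, yaw);
# the planner solves the inverse kinematics and returns a trajectory,
# i.e. a series of intermediate points between start and target.
group.set_pose_target([0.4, 0.1, 0.3, 3.1416, 0.0, 0.0])
group.go(wait=True)
group.stop()
group.clear_pose_targets()
```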