Ensuring Computers Understand Manual Operations in Production: Deep-Learning-Based Action Recognition in Industrial Workflows
In this study, we consider fully automated action recognition based on deep learning in the industrial environment. In contrast to most existing methods, which rely on professional knowledge to construct complex hand-crafted features or use only basic deep-learning models, such as convolutional neural networks (CNNs), to extract information from images of the production process, we propose a novel and effective method that integrates multiple deep-learning networks, including CNNs, spatial transformer networks (STNs), and graph convolutional networks (GCNs), to process video data from industrial workflows. The proposed method extracts both spatial and temporal information from the video. Spatial information is extracted by estimating the human pose in each frame, yielding a skeleton image of the human body for every frame; the resulting multi-frame skeleton sequences are then processed by a GCN to capture temporal information, so that action labels are predicted automatically. After training on Kinetics, a large-scale human action dataset, we apply the proposed method to a real-world industrial environment, where it achieves superior performance compared with existing methods.
Main Authors: | Zeyu Jiao (School of Economics and Management, Beihang University, Beijing 100191, China); Guozhu Jia (School of Economics and Management, Beihang University, Beijing 100191, China); Yingjie Cai (Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China) |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-02-01 |
Series: | Applied Sciences, vol. 10, no. 3, article 966 |
ISSN: | 2076-3417 |
DOI: | 10.3390/app10030966 |
Subjects: | deep learning; action recognition; convolutional neural network; spatial transformer network; graph convolutional network; industrial workflows |
Online Access: | https://www.mdpi.com/2076-3417/10/3/966 |
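The abstract describes a two-stage pipeline: a per-frame pose estimator (CNN plus STN in the paper) produces skeleton keypoints, and a GCN then classifies the multi-frame skeleton sequence into an action label. As a rough illustration of the second stage only, the sketch below classifies skeleton sequences with a plain graph convolution over the joint graph. The joint count, edge list, layer widths, class count, and the random stand-in for the pose estimator's output are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of skeleton-sequence action classification with a GCN.
# Hypothetical sizes throughout; the paper's actual architecture differs.
import torch
import torch.nn as nn

NUM_JOINTS = 18    # COCO-style keypoints, commonly used with Kinetics skeletons
NUM_CLASSES = 10   # hypothetical number of industrial actions

def normalized_adjacency(edges, num_joints=NUM_JOINTS):
    """Build D^-1/2 (A + I) D^-1/2 for the skeleton joint graph."""
    A = torch.eye(num_joints)            # self-loops (the +I term)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0          # undirected bone connections
    d_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt

class GraphConv(nn.Module):
    """One GCN layer: aggregate over neighboring joints, then a linear map."""
    def __init__(self, in_ch, out_ch, A_hat):
        super().__init__()
        self.register_buffer("A_hat", A_hat)
        self.linear = nn.Linear(in_ch, out_ch)

    def forward(self, x):  # x: (batch, frames, joints, channels)
        x = torch.einsum("ij,btjc->btic", self.A_hat, x)  # spatial aggregation
        return torch.relu(self.linear(x))

class SkeletonActionClassifier(nn.Module):
    """Stack spatial GCN layers, then pool over frames and joints."""
    def __init__(self, A_hat, num_classes=NUM_CLASSES):
        super().__init__()
        self.gcn1 = GraphConv(2, 64, A_hat)   # input channels: (x, y) coordinates
        self.gcn2 = GraphConv(64, 128, A_hat)
        self.head = nn.Linear(128, num_classes)

    def forward(self, poses):  # poses: (batch, frames, joints, 2)
        h = self.gcn2(self.gcn1(poses))
        h = h.mean(dim=(1, 2))  # temporal + spatial average pooling
        return self.head(h)

# Usage: random keypoints stand in for the pose estimator's per-frame output.
edges = [(0, 1), (1, 2), (2, 3)]          # truncated edge list, for illustration
model = SkeletonActionClassifier(normalized_adjacency(edges))
poses = torch.rand(4, 32, NUM_JOINTS, 2)  # 4 clips, 32 frames each
logits = model(poses)                     # shape: (4, NUM_CLASSES)
```

Note that this sketch pools frames with a simple average; richer temporal modeling (e.g., temporal convolutions over the frame axis, as in ST-GCN-style networks) is one plausible way the multi-frame skeletons could be processed, but the exact mechanism is not specified in the abstract.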