Activity recognition in manufacturing: the roles of motion capture and sEMG+inertial wearables in detecting fine vs gross motion

Bibliographic Details
Main Authors: Kubota, Alyssa (Author), Iqbal, Tariq (Author), Shah, Julie A (Author), Riek, Laurel D. (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: IEEE, 2020-06-19.
Description
Summary: In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g., motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and the motion granularity of the activities (e.g., gross or fine) impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to that of a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (by up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (by up to 28.06%). These results suggest that the two sensor modalities are complementary, and that robots may benefit from systems that use multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to choose the sensor modalities most suitable for their applications.
Funding: National Science Foundation (grant nos. IIS-1724982 and IIS-1734482)
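
To illustrate the kind of per-modality comparison the summary describes, a minimal sketch in Python follows: the same classifier is trained on windowed features from each modality and scored separately, so accuracies can be compared side by side. The random-forest classifier, synthetic signals, window size, channel counts, and feature set are all illustrative assumptions, not the authors' actual pipeline or dataset.

# Hypothetical sketch: compare HAR accuracy of two sensor modalities
# (motion capture vs. sEMG+inertial wearable) with the same classifier.
# All data here is synthetic; channel counts are guesses for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, labels, win=50, step=25):
    # Segment a (T, channels) signal into fixed-size windows, compute
    # simple per-channel mean/std features, and label each window by
    # its center sample.
    X, y = [], []
    for start in range(0, len(signal) - win, step):
        seg = signal[start:start + win]
        X.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        y.append(labels[start + win // 2])
    return np.array(X), np.array(y)

# Synthetic stand-ins for the two modalities.
rng = np.random.default_rng(0)
T = 2000
labels = rng.integers(0, 5, size=T)   # 5 hypothetical activity classes
mocap = rng.normal(size=(T, 39))      # e.g., 13 markers x 3 coordinates
wearable = rng.normal(size=(T, 17))   # e.g., 8 sEMG + 9 inertial channels

for name, signal in (("motion capture", mocap), ("wearable", wearable)):
    X, y = window_features(signal, labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")

On real data, running this per task (gross vs. fine-grained assembly) would surface the modality/granularity interaction the abstract reports, with each modality winning on a different task.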