Activity recognition in manufacturing: the roles of motion capture and sEMG+inertial wearables in detecting fine vs gross motion

Bibliographic Details
Main Authors: Kubota, Alyssa (Author), Iqbal, Tariq (Author), Shah, Julie A (Author), Riek, Laurel D. (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: IEEE, 2020-06-19T18:25:06Z.
Subjects:
Online Access: Get fulltext
LEADER 02352 am a22002173u 4500
001 125890
042 |a dc 
100 1 0 |a Kubota, Alyssa  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
700 1 0 |a Iqbal, Tariq  |e author 
700 1 0 |a Shah, Julie A  |e author 
700 1 0 |a Riek, Laurel D.  |e author 
245 0 0 |a Activity recognition in manufacturing: the roles of motion capture and sEMG+inertial wearables in detecting fine vs gross motion 
260 |b IEEE,   |c 2020-06-19T18:25:06Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/125890 
520 |a In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g. motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and the motion granularity (e.g. gross or fine) of the activities impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of using motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to that of a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that these sensor modalities are complementary, and that robots may benefit from systems that utilize multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to effectively choose sensor modalities that are most suitable for their applications. 
520 |a National Science Foundation (grant nos. IIS-1724982 and IIS-1734482) 
546 |a en 
655 7 |a Article 
773 |t 10.1109/ICRA.2019.8793954 
773 |t International Conference on Robotics and Automation (ICRA)
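
The comparison described in the abstract, training the same HAR classifiers on data from a Vicon motion capture system and from a Myo sEMG+inertial armband and contrasting their per-task accuracy, can be illustrated with a minimal sketch. The sketch below is not the paper's pipeline: the record does not name the three HAR algorithms used, so a generic random-forest classifier stands in, and the feature matrices and labels are synthetic placeholders for windowed features extracted from each sensor stream.

import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for the paper's unnamed HAR algorithms
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_classes = 500, 6

# Hypothetical per-window feature matrices, one per sensor modality.
X_mocap = rng.normal(size=(n_windows, 60))      # e.g. joint-position/velocity statistics from Vicon
X_wearable = rng.normal(size=(n_windows, 40))   # e.g. sEMG + inertial statistics from the Myo armband
y = rng.integers(0, n_classes, size=n_windows)  # activity label for each time window

# Train the same classifier on each modality and compare cross-validated accuracy.
for name, X in [("motion capture", X_mocap), ("sEMG+inertial wearable", X_wearable)]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")

On real data, the per-modality accuracies produced by such a comparison would be contrasted separately for the gross and fine-grained assembly tasks, which is the kind of per-task difference the abstract reports.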