Human Activity-Understanding: A Multilayer Approach Combining Body Movements and Contextual Descriptors Analysis
Main Authors: , ,
Format: Article
Language: English
Published: SAGE Publishing, 2015-07-01
Series: International Journal of Advanced Robotic Systems
Online Access: https://doi.org/10.5772/60525
Summary: A deep understanding of human activity is key to successful human-robot interaction (HRI). The translation of sensed human behavioural signals/cues and context descriptors into an encoded human activity remains a challenge because of the complex nature of human actions. In this paper, we propose a multilayer framework for the understanding of human activity to be implemented in a mobile robot. It consists of a perception layer which exploits an RGB-D-based skeleton tracking output used to simulate a physical model of virtual human dynamics in order to compensate for the inaccuracy and inconsistency of the raw data. A multi-support vector machine (MSVM) model trained with features describing the human motor coordination through temporal segments in combination with environment descriptors (object affordance) is used to recognize each sub-activity (classification layer). The interpretation of sequences of classified elementary actions is based on discrete hidden Markov models (DHMMs) (interpretation layer). The framework assessment was performed on the Cornell Activity Dataset (CAD-120) [1]. The performances of our method are comparable with those presented in [2] and clearly show the relevance of this model-based approach.
ISSN: 1729-8814
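The interpretation layer described in the summary scores a sequence of classified sub-activities against one discrete HMM per high-level activity and keeps the best-scoring model. A minimal sketch of that idea follows; the activity names, label encoding, and all probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete
    observation sequence. pi: initial state probabilities,
    A: state transition matrix, B: per-state emission probabilities."""
    alpha = pi * B[:, obs[0]]          # initialise with first observation
    log_lik = 0.0
    for o in obs[1:]:
        scale = alpha.sum()            # rescale to avoid numerical underflow
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, o]
    return log_lik + np.log(alpha.sum())

def classify_activity(obs, models):
    """Pick the activity whose DHMM assigns the label sequence the
    highest log-likelihood (one DHMM per activity, argmax decision)."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Hypothetical sub-activity labels from the classification layer:
# 0 = reach, 1 = move, 2 = place.
pi = np.array([0.8, 0.2])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
models = {
    "reach_object": (pi, A, np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]])),
    "place_object": (pi, A, np.array([[0.1, 0.2, 0.7], [0.2, 0.2, 0.6]])),
}

print(classify_activity([0, 0, 1, 1], models))  # reach-dominant sequence
print(classify_activity([2, 2, 2, 1], models))  # place-dominant sequence
```

In the paper's pipeline the observation sequence would come from the MSVM sub-activity classifier and the DHMM parameters would be learned from CAD-120 training sequences; here they are hand-set only to make the argmax decision visible.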