On Quaternions and Activity Classification Across Sensor Domains
Main Author:
Other Authors:
Format: Others
Published: Virginia Tech, 2015
Subjects:
Online Access: http://hdl.handle.net/10919/51196
Summary: Activity classification based on sensor data is a challenging task. Many studies have focused on two main
methods of enabling activity classification: sensor-level classification and body-model-level classification.
This study aims to enable activity classification across sensor domains by considering an e-textile garment
and to provide the groundwork for transferring the e-textile garment to a vision-based classifier. The framework
comprises three main components that enable the successful transfer of the body-worn system to the
vision-based classifier. The inter-class confusion of the activity space is quantified to allow an ideal prediction
of known-class accuracy for varying levels of error within the system. Methods for quantifying sensor- and
garment-level error are undertaken to identify challenges specific to a body-worn system. These methods
are then used to inform decisions related to the classification accuracy and threshold of the classifier. Using
activities from a vision-based system known to the classifier, a user study was conducted to generate an
observed set of activities from the body-worn system. The results indicate that the vision-based classifier
used is user-independent and can successfully handle classification across sensor domains.

Master of Science
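
The quaternion formulation named in the title is not detailed in this record, but the general idea of representing body-worn sensor orientation as a unit quaternion and rotating sensor-frame readings into a common body frame can be sketched as follows. This is a minimal, illustrative sketch only: the function names, the [w, x, y, z] component ordering, and the use of NumPy are assumptions and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis implementation): rotate a sensor-frame
# 3-vector into a shared body frame using a unit quaternion q = [w, x, y, z].
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q via q * [0, v] * q_conjugate."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)                      # enforce unit norm
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate of a unit quaternion
    v_quat = np.concatenate(([0.0], v))             # embed v as a pure quaternion
    return quat_multiply(quat_multiply(q, v_quat), q_conj)[1:]

# Example: a 90-degree rotation about the z-axis maps an x-axis reading onto y.
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(rotate_vector(q_z90, np.array([1.0, 0.0, 0.0])))  # approximately [0, 1, 0]
```

Expressing each sensor's readings in a common frame in this way is one standard route to comparing body-worn measurements against a body-model or vision-based representation; the thesis itself should be consulted for the specific quaternion pipeline it uses.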