Automatic Multi-View Action Recognition with Robust Features
Master's === National Chiao Tung University === Institute of Electrical and Control Engineering === 101 === In this paper, we propose a practical, reliable, and generic system for video-based human action recognition. To describe actions performed from different viewpoints, we use view-invariant features to address multi-view action recognition. These features are obtained by extracting holistic features from clouds of interest points at different temporal scales; the clouds explicitly model the global spatial and temporal distribution of the interest points. The resulting view-invariant features are highly discriminative and robust for recognizing actions under viewpoint changes. For practical applications, we propose a mechanism that observes the actions a person performs in an image sequence and segments them according to the training data. In addition, our scheme labels the beginning and end of each action sequence automatically, without manual annotation. Experiments on the KTH, WEIZMANN, and MuHAVi datasets demonstrate that our approach outperforms most existing methods. The experiments also show that our system performs well when training and testing use different datasets; in other words, the system does not need to be retrained when the scenario changes, and the trained database is applicable to a variety of environments.
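The abstract describes building view-invariant features by summarizing clouds of spatio-temporal interest points at several temporal scales. A minimal sketch of that idea in Python (a hypothetical simplification with assumed window sizes and histogram descriptors, not the thesis's actual implementation) might look like:

```python
import numpy as np

def cloud_descriptor(points, num_bins=4):
    """Histogram of the spatial distribution of one interest-point cloud.

    points: (N, 2) array of (x, y) interest-point locations, assumed
    normalized to [0, 1] so the descriptor is insensitive to frame size.
    """
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=num_bins, range=[[0, 1], [0, 1]])
    total = hist.sum()
    return (hist / total).ravel() if total else hist.ravel()

def multi_scale_features(interest_points, scales=(4, 8, 16)):
    """Concatenate cloud descriptors over several temporal scales.

    interest_points: (N, 3) array of (t, x, y), with x, y in [0, 1].
    For each window length w, points are grouped into clouds of w frames;
    window descriptors are averaged so the feature length is independent
    of the sequence length.
    """
    t = interest_points[:, 0]
    feats = []
    for w in scales:
        windows = np.floor(t / w).astype(int)
        descs = [cloud_descriptor(interest_points[windows == k, 1:])
                 for k in np.unique(windows)]
        feats.append(np.mean(descs, axis=0))
    return np.concatenate(feats)
```

A fixed-length vector like this could then be fed to any standard classifier; the actual descriptor, scales, and classifier used in the thesis are not specified in this record.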
Main Authors: | Lin, Yu-Feng 林峪鋒 |
---|---|
Other Authors: | Lin, Chin-Teng |
Format: | Others |
Language: | zh-TW |
Published: | 2013 |
Online Access: | http://ndltd.ncl.edu.tw/handle/20064247425571252919 |
id |
ndltd-TW-101NCTU5449121 |
record_format |
oai_dc |
spelling |
ndltd-TW-101NCTU54491212016-07-02T04:20:30Z http://ndltd.ncl.edu.tw/handle/20064247425571252919 Automatic Multi-View Action Recognition with Robust Features 強健特徵之自動化多視角動作辨識 Lin, Yu-Feng 林峪鋒 Master's === National Chiao Tung University === Institute of Electrical and Control Engineering === 101 === In this paper, we propose a practical, reliable, and generic system for video-based human action recognition. To describe actions performed from different viewpoints, we use view-invariant features to address multi-view action recognition. These features are obtained by extracting holistic features from clouds of interest points at different temporal scales; the clouds explicitly model the global spatial and temporal distribution of the interest points. The resulting view-invariant features are highly discriminative and robust for recognizing actions under viewpoint changes. For practical applications, we propose a mechanism that observes the actions a person performs in an image sequence and segments them according to the training data. In addition, our scheme labels the beginning and end of each action sequence automatically, without manual annotation. Experiments on the KTH, WEIZMANN, and MuHAVi datasets demonstrate that our approach outperforms most existing methods. The experiments also show that our system performs well when training and testing use different datasets; in other words, the system does not need to be retrained when the scenario changes, and the trained database is applicable to a variety of environments. Lin, Chin-Teng 林進燈 2013 學位論文 ; thesis 53 zh-TW |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master's === National Chiao Tung University === Institute of Electrical and Control Engineering === 101 === In this paper, we propose a practical, reliable, and generic system for video-based human action recognition. To describe actions performed from different viewpoints, we use view-invariant features to address multi-view action recognition. These features are obtained by extracting holistic features from clouds of interest points at different temporal scales; the clouds explicitly model the global spatial and temporal distribution of the interest points. The resulting view-invariant features are highly discriminative and robust for recognizing actions under viewpoint changes. For practical applications, we propose a mechanism that observes the actions a person performs in an image sequence and segments them according to the training data. In addition, our scheme labels the beginning and end of each action sequence automatically, without manual annotation. Experiments on the KTH, WEIZMANN, and MuHAVi datasets demonstrate that our approach outperforms most existing methods. The experiments also show that our system performs well when training and testing use different datasets; in other words, the system does not need to be retrained when the scenario changes, and the trained database is applicable to a variety of environments.
|
author2 |
Lin, Chin-Teng |
author_facet |
Lin, Chin-Teng Lin, Yu-Feng 林峪鋒 |
author |
Lin, Yu-Feng 林峪鋒 |
spellingShingle |
Lin, Yu-Feng 林峪鋒 Automatic Multi-View Action Recognition with Robust Features |
author_sort |
Lin, Yu-Feng |
title |
Automatic Multi-View Action Recognition with Robust Features |
title_short |
Automatic Multi-View Action Recognition with Robust Features |
title_full |
Automatic Multi-View Action Recognition with Robust Features |
title_fullStr |
Automatic Multi-View Action Recognition with Robust Features |
title_full_unstemmed |
Automatic Multi-View Action Recognition with Robust Features |
title_sort |
automatic multi-view action recognition with robust features |
publishDate |
2013 |
url |
http://ndltd.ncl.edu.tw/handle/20064247425571252919 |
work_keys_str_mv |
AT linyufeng automaticmultiviewactionrecognitionwithrobustfeatures AT línyùfēng automaticmultiviewactionrecognitionwithrobustfeatures AT linyufeng qiángjiàntèzhēngzhīzìdònghuàduōshìjiǎodòngzuòbiànshí AT línyùfēng qiángjiàntèzhēngzhīzìdònghuàduōshìjiǎodòngzuòbiànshí |
_version_ |
1718331556426153984 |