Multi-View Temporal Ensemble for Classification of Non-Stationary Signals

In the classification of non-stationary time series data such as sounds, it is often tedious and expensive to obtain a training set that is representative of the target concept. To alleviate this problem, the proposed method treats the outputs of a number of deep learning sub-models as views of the same target concept that can be linearly combined according to their complementarity. It is proposed that a view's complementarity be its contribution to the global view, chosen in this paper to be the Laplacian eigenmap of the combined data. Complementarity is computed by alternate optimization, a process that involves the cost function of the Laplacian eigenmap and the weights of the linear combination. By blending the views in this way, a more complete view of the underlying phenomenon can be made available to the final classifier. Better generalization is obtained, as the consensus between the views reduces the variance while the increase in discriminatory information reduces the bias. Experiments with artificial views of environmental sounds, formed by deep learning structures of different configurations, show that the proposed method can improve classification performance.
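
To make the description concrete, below is a minimal NumPy sketch of the kind of alternating scheme the abstract outlines: each view gets a graph Laplacian, a shared eigenmap embedding is computed from a weighted combination of those Laplacians, and the view weights are then re-estimated from each view's cost. The function names, the Gaussian affinity, the weighted-sum objective, and the closed-form weight update are assumptions chosen for illustration (a common multi-view spectral embedding formulation), not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): alternating optimization between a
# Laplacian-eigenmap-style embedding and non-negative view weights.
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from a Gaussian affinity on the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T        # pairwise squared distances
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))   # affinity matrix
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def blend_views(views, dim=2, r=2.0, n_iter=10):
    """Alternately update (i) a shared embedding Y and (ii) the view weights alpha.
    Assumed cost: sum_v alpha_v**r * tr(Y.T @ L_v @ Y), with sum(alpha) = 1, alpha >= 0."""
    laplacians = [graph_laplacian(V) for V in views]
    alpha = np.full(len(views), 1.0 / len(views))          # start from uniform weights
    for _ in range(n_iter):
        # (i) embedding step: bottom eigenvectors of the weighted Laplacian
        L = sum(a**r * Lv for a, Lv in zip(alpha, laplacians))
        _, vecs = np.linalg.eigh(L)
        Y = vecs[:, 1:dim + 1]                             # skip the trivial constant eigenvector
        # (ii) weight step: views with lower embedding cost receive higher weight
        costs = np.array([np.trace(Y.T @ Lv @ Y) for Lv in laplacians])
        alpha = (1.0 / np.maximum(costs, 1e-12)) ** (1.0 / (r - 1.0))
        alpha /= alpha.sum()
    return alpha, Y

# Toy usage: three hypothetical "views" of the same 60 samples, e.g. sub-model outputs.
rng = np.random.default_rng(0)
views = [rng.normal(size=(60, 16)) for _ in range(3)]
alpha, Y = blend_views(views)
print("view weights:", np.round(alpha, 3))
```

In this sketch the learned weights play the role the abstract assigns to "complementarity": a view whose geometry agrees with the shared embedding contributes more to the blended representation passed to the final classifier.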

Bibliographic Details
Main Authors: B. H. D. Koh (ORCID: 0000-0001-8937-5359), Wai Lok Woo
Affiliation: School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, U.K.
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access, vol. 7, pp. 32482-32491
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2903571
Subjects: Deep learning; data fusion; time series classification
Online Access: https://ieeexplore.ieee.org/document/8662555/