Sound Context Classification based on Joint Learning Model and Multi-Spectrogram Features


Bibliographic Details
Main Authors: Ly, T. (Author), Ngo, D. (Author), Ngo, T. (Author), Nguyen, A. (Author), Pham, K. (Author), Pham, L. (Author)
Format: Article
Language: English
Published: Research Institute of Intelligent Computer Systems 2022
Subjects:
Online Access: View Fulltext in Publisher
LEADER 02420nam a2200265Ia 4500
001 10.47839-ijc.21.2.2595
008 220718s2022 CNT 000 0 und d
020 |a 17276209 (ISSN) 
245 1 0 |a Sound Context Classification based on Joint Learning Model and Multi-Spectrogram Features 
260 0 |b Research Institute of Intelligent Computer Systems  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.47839/ijc.21.2.2595 
520 3 |a This article presents a deep learning framework applied for Acoustic Scene Classification (ASC), the task of classifying different environments from the sounds they produce. To develop the framework, we first carry out a comprehensive analysis of spectrogram representations extracted from sound scene input, and then propose the best multi-spectrogram combination for front-end feature extraction. For back-end classification, we propose a novel joint learning model using a parallel architecture of a Convolutional Neural Network (CNN) and a Convolutional Recurrent Neural Network (C-RNN), which efficiently learns both the spatial features and the temporal sequences of a spectrogram input. The experimental results demonstrate that the proposed framework is general and robust for ASC tasks through three main contributions. First, we identify the most effective spectrogram combination for specific datasets, which no previous publication has analyzed. Second, our joint learning architecture of CNN and C-RNN achieves better performance than the CNN-only model proposed as the baseline in this paper. Finally, our framework achieves competitive performance compared with state-of-the-art systems on benchmark datasets from the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Task 1, 2017 Task 1, 2018 Tasks 1A & 1B, and the LITIS Rouen dataset. © 2022 International Journal of Computing. All Rights Reserved. 
650 0 4 |a Acoustic scene classification 
650 0 4 |a Convolutional neural network 
650 0 4 |a Feature extraction 
650 0 4 |a Joint learning architecture 
650 0 4 |a Recurrent neural network 
650 0 4 |a Spectrogram 
700 1 |a Ly, T.  |e author 
700 1 |a Ngo, D.  |e author 
700 1 |a Ngo, T.  |e author 
700 1 |a Nguyen, A.  |e author 
700 1 |a Pham, K.  |e author 
700 1 |a Pham, L.  |e author 
773 |t International Journal of Computing  |x 17276209 (ISSN)  |g 21(2), 258-270