Personalized Music Emotion Recognition using Electroencephalography (EEG)
Master's thesis === Fu Jen Catholic University === Graduate Institute of Computer Science and Information Engineering === Academic year 101 === Emotion recognition of music objects is one of the promising and demanding research issues in the field of music information retrieval. Usually, music emotion recognition can be formulated as a standard training/classification problem. However, even if...
Main Authors: | Yan-Lin Zhen 曾硯琳 |
Other Authors: | Jia-Lien Hsu 徐嘉連 |
Format: | Others |
Language: | en_US |
Published: | 2014 |
Online Access: | http://ndltd.ncl.edu.tw/handle/48135319400070992035 |
id |
ndltd-TW-102FJU00396041 |
record_format |
oai_dc |
spelling |
ndltd-TW-102FJU003960412016-05-22T04:34:30Z http://ndltd.ncl.edu.tw/handle/48135319400070992035 Personalized Music Emotion Recognition using Electroencephalography (EEG) 以腦波為佐證的音樂情緒分析 Yan-Lin Zhen 曾硯琳 Master's thesis === Fu Jen Catholic University === Graduate Institute of Computer Science and Information Engineering === Academic year 101
Jia-Lien Hsu 徐嘉連 2014 學位論文 ; thesis 30 en_US |
collection |
NDLTD |
language |
en_US |
format |
Others |
sources |
NDLTD |
description |
Master's thesis === Fu Jen Catholic University === Graduate Institute of Computer Science and Information Engineering === Academic year 101 === Emotion recognition of music objects is one of the promising and demanding research issues in the field of music information retrieval.
Usually, music emotion recognition can be formulated as a standard training/classification problem.
However, even if we have a benchmark (a training set with ground truth) and employ effective classification algorithms, music emotion recognition remains a challenging problem.
Based on our literature review, most previous work focuses only on the acoustic content of music without considering individual differences (i.e., the personalization issue). In addition, emotion assessments are usually self-reported. Such self-reported assessments (e.g., emotion tags) may be inaccurate and even inconsistent.
Meanwhile, up-to-date research on emotion recognition suggests that emotion activation is associated with voice, gestures, facial muscle movements, physiological signals originating from the peripheral nervous system (such as heart rate and galvanic skin response), and brain activity.
Electroencephalography (EEG) is a non-invasive brain-machine interface that conveys neurophysiological signals from the brain to external machines without surgery. These less-intrusive EEG signals, captured from the central nervous system, have been utilized for exploring emotions.
In this paper, we propose an evidence-based and personalized model for music emotion recognition.
In the model-construction and personalized-adaptation steps of the training phase, based on the IADS (the International Affective Digitized Sounds, a set of acoustic emotional stimuli for experimental investigations of emotion and attention), we construct two generic predictive models: ANN1 (``EEG recordings of experts vs. emotions'') and ANN2 (``music audio content vs. emotion''). Both models are trained as artificial neural networks.
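Models of this kind can be sketched as small feed-forward regressors onto the arousal-valence plane. The following is only a minimal illustration, not the thesis's actual architecture; the layer size, learning rate, and epoch count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_av_regressor(X, Y, hidden=8, lr=0.1, epochs=3000):
    """Train a one-hidden-layer network regressing feature rows X
    (n_samples x n_features) onto arousal-valence targets Y
    (n_samples x 2) by full-batch gradient descent on squared error."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 2)); b2 = np.zeros(2)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)           # hidden activations
        err = (h @ W2 + b2) - Y            # prediction error
        gW2 = h.T @ err / n                # output-layer gradients
        gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
        gW1 = X.T @ gh / n
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict_av(X, params):
    """Map feature rows to (arousal, valence) predictions."""
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2
```

Trained on EEG-derived features such a regressor would play the role of ANN1; trained on audio features, the role of ANN2.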
Then, for a subject A, we collect his/her EEG recordings while he/she listens to the selected IADS samples, and apply ANN1 to determine the emotion vector of subject A. Having the generic model and the corresponding individual differences, we construct the personalized model $H$ by projective transformation.
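A projective transform of the arousal-valence plane can be estimated as a 3x3 homography from corresponding points, i.e., the generic model's outputs versus the subject's ANN1-derived vectors. Below is a minimal sketch using the standard direct linear transform (DLT) with SVD; the point sets and function names are illustrative assumptions, and at least four non-degenerate correspondences are required:

```python
import numpy as np

def estimate_homography(generic_pts, personal_pts):
    """Estimate a 3x3 projective transform H mapping generic
    arousal-valence points onto a subject's personalized points,
    via the direct linear transform (DLT) and SVD."""
    A = []
    for (x, y), (u, v) in zip(generic_pts, personal_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)      # null-space vector -> 3x3 matrix
    return H / H[2, 2]            # normalize so H[2, 2] == 1

def apply_homography(H, point):
    """Map one (arousal, valence) point through H in homogeneous coords."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)
```

Given exact correspondences, the null space of the stacked constraint matrix recovers $H$ up to scale, which the final normalization fixes.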
In the testing phase, given a music object, the processing steps are: (1) extract features from the music audio content, (2) apply ANN2 to compute the vector in the arousal-valence emotion space, and (3) apply the transformation matrix $H$ to determine the personalized emotion vector.
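The three testing steps can be chained as follows; `extract_audio_features` and `ann2` are hypothetical stand-ins for the thesis's feature extractor and trained network, shown only to make the data flow concrete:

```python
import numpy as np

def extract_audio_features(music_object):
    # Hypothetical stand-in for step (1): the thesis extracts
    # acoustic features from the music audio content.
    return np.asarray(music_object, dtype=float)

def ann2(features, W, b):
    # Hypothetical stand-in for step (2): the trained ANN2 mapping
    # audio features to a generic (arousal, valence) vector.
    return np.tanh(W @ features + b)

def personalized_emotion(music_object, W, b, H):
    """Step (3): warp the generic arousal-valence vector through the
    subject's projective transform H (homogeneous coordinates)."""
    a, v = ann2(extract_audio_features(music_object), W, b)
    x, y, w = H @ np.array([a, v, 1.0])
    return (x / w, y / w)
```

With `H = np.eye(3)` (no personalization) the pipeline reduces to the generic ANN2 prediction.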
To show the effectiveness of our approach, we perform experiments and obtain promising results.
|
author2 |
Jia-Lien Hsu |
author_facet |
Jia-Lien Hsu Yan-Lin Zhen 曾硯琳 |
author |
Yan-Lin Zhen 曾硯琳 |
spellingShingle |
Yan-Lin Zhen 曾硯琳 Personalized Music Emotion Recognition using Electroencephalography (EEG) |
author_sort |
Yan-Lin Zhen |
title |
Personalized Music Emotion Recognition using Electroencephalography (EEG) |
title_short |
Personalized Music Emotion Recognition using Electroencephalography (EEG) |
title_full |
Personalized Music Emotion Recognition using Electroencephalography (EEG) |
title_fullStr |
Personalized Music Emotion Recognition using Electroencephalography (EEG) |
title_full_unstemmed |
Personalized Music Emotion Recognition using Electroencephalography (EEG) |
title_sort |
personalized music emotion recognition using electroencephalography (eeg) |
publishDate |
2014 |
url |
http://ndltd.ncl.edu.tw/handle/48135319400070992035 |
work_keys_str_mv |
AT yanlinzhen personalizedmusicemotionrecognitionusingelectroencephalographyeeg AT céngyànlín personalizedmusicemotionrecognitionusingelectroencephalographyeeg AT yanlinzhen yǐnǎobōwèizuǒzhèngdeyīnlèqíngxùfēnxī AT céngyànlín yǐnǎobōwèizuǒzhèngdeyīnlèqíngxùfēnxī |
_version_ |
1718275417600688128 |