Affective Voice Interaction and Artificial Intelligence: A Research Study on the Acoustic Features of Gender and the Emotional States of the PAD Model

New types of artificial intelligence products are gradually shifting to voice interaction, and the demands placed on intelligent products are expanding from communication to recognizing users' emotions and giving instantaneous feedback. At present, affective acoustic models are constructed through deep learning and abstracted into mathematical models, so that computers learn from data and acquire predictive ability. Although this approach can yield accurate predictions, it lacks explanatory power; an empirical study of the connection between acoustic features and psychological states is urgently needed as a theoretical basis for adjusting model parameters. Accordingly, this study explores how seven major acoustic features and their physical characteristics differ in voice interaction with respect to the recognition and expression of gender and the emotional states of the pleasure-arousal-dominance (PAD) model. Thirty-one females and 31 males aged between 21 and 60, recruited by stratified random sampling, recorded audio expressing different emotions. Parameter values of the acoustic features were then extracted with the Praat voice-analysis software and analyzed with a two-way mixed-design ANOVA in SPSS. The results show that the seven major acoustic features differ by gender and by emotional state of the PAD model, and that the magnitudes and rankings of these differences also vary. These conclusions lay a theoretical foundation for affective voice interaction in AI and address the lack of explanatory power that currently limits deep learning in emotion recognition and in parameter optimization of emotional speech-synthesis models.
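
As an illustration of the kind of pipeline the abstract describes, the sketch below extracts a few acoustic parameters from a recording with the parselmouth Python library (a wrapper around Praat's analysis routines) and then tests gender and emotion effects with a two-way mixed-design ANOVA from the pingouin package. The file names, column names, and the specific features shown (mean F0, intensity, jitter, shimmer) are illustrative assumptions; the study itself analyzed seven acoustic features using Praat and SPSS directly, not this code.

```python
# Minimal sketch (not the authors' code): extract a few acoustic features with
# parselmouth (Praat bindings for Python), then test gender (between-subjects)
# x emotion (within-subjects) effects with a mixed-design ANOVA via pingouin.
# File and column names below are hypothetical.
import parselmouth
from parselmouth.praat import call
import pandas as pd
import pingouin as pg


def extract_features(wav_path):
    """Return mean F0, F0 SD, mean intensity, jitter, and shimmer for one file."""
    snd = parselmouth.Sound(wav_path)

    pitch = snd.to_pitch()
    f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")
    f0_sd = call(pitch, "Get standard deviation", 0, 0, "Hertz")

    intensity = snd.to_intensity()
    intensity_mean = call(intensity, "Get mean", 0, 0, "energy")

    # Voice-quality measures are computed from a glottal point process
    point_process = call(snd, "To PointProcess (periodic, cc)", 75, 600)
    jitter = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, point_process], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)

    return {"f0_mean": f0_mean, "f0_sd": f0_sd,
            "intensity_mean": intensity_mean,
            "jitter": jitter, "shimmer": shimmer}


# Long-format table: one row per speaker x emotion, e.g. built by calling
# extract_features() on every recording (hypothetical CSV shown here).
df = pd.read_csv("acoustic_features_long.csv")  # speaker, gender, emotion, f0_mean, ...

# Two-way mixed-design ANOVA on one feature: gender between-subjects, emotion within.
aov = pg.mixed_anova(data=df, dv="f0_mean", within="emotion",
                     subject="speaker", between="gender")
print(aov)
```

Repeating the test for each remaining feature would give the per-feature comparison of gender and emotion effects that the abstract summarizes.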

Bibliographic Details
Main Authors: Kuo-Liang Huang (Department of Industrial Design, Design Academy, Sichuan Fine Arts Institute, Chongqing, China), Sheng-Feng Duan (Department of Industrial Design, Design Academy, Sichuan Fine Arts Institute, Chongqing, China), Xi Lyu (Department of Digital Media Art, Design Academy, Sichuan Fine Arts Institute, Chongqing, China)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-05-01
Series: Frontiers in Psychology, Vol. 12, Article 664925 (2021)
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2021.664925
Subjects: voice-user interface (VUI); affective computing; acoustic features; emotion analysis; PAD model
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.664925/full