Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
Master's thesis === National Dong Hwa University === Department of Applied Mathematics === 104 === This thesis explores learning deep neural networks with the Levenberg-Marquardt (LM) algorithm, with applications to key speaker verification. The incremental construction of deep multilayer perceptrons is based on effective LM learning of receptive...
Main Author: Bo-Xiang Chiou (邱柏翔)
Other Authors: Jiann-Ming Wu (吳建銘)
Format: Others
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/23436176079310280180
id: ndltd-TW-104NDHU5507011
record_format: oai_dc
spelling:
ndltd-TW-104NDHU5507011 2017-09-03T04:25:32Z http://ndltd.ncl.edu.tw/handle/23436176079310280180 Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification 以深度及監督式Levenberg-Marquardt學習法解語者辨識問題 Bo-Xiang Chiou 邱柏翔 Master's thesis, National Dong Hwa University, Department of Applied Mathematics, academic year 104. This thesis explores learning deep neural networks with the Levenberg-Marquardt (LM) algorithm, with applications to key speaker verification. The incremental construction of deep multilayer perceptrons is based on effective LM learning of the receptive fields of perceptrons within the same hidden layer. The predictors are acoustic features, namely mel-scale frequency cepstral coefficients (MFCCs). The desired targets represent the two alternative acoustic sources of the predictors: either the key speaker or the remaining reference speakers. Given paired predictors and targets, learning the first hidden layer aims to translate predictors into internal representations that are adaline (adaptive linear element) separable. A deeper hidden layer is allocated whenever unsolvable constraints are discovered during the execution phase; these constraints are recruited for deep learning. Deep learning preserves the receptive fields of previous layers and trains only the receptive fields of the topmost perceptrons, significantly reducing the number of adaptable interconnections optimized by the LM algorithm. The topmost perceptrons can also serve as a junction for integrating two different neural networks. Each time, newly allocated perceptrons are connected to the topmost layer and their receptive fields are optimized by the LM algorithm subject to both novel and inherited constraints. The incremental construction process is applied extensively to text-dependent key speaker verification. Advisor: Jiann-Ming Wu 吳建銘. 2016. Degree thesis; 27.
collection: NDLTD
format: Others
sources: NDLTD
description:
Master's thesis === National Dong Hwa University === Department of Applied Mathematics === 104 === This thesis explores learning deep neural networks with the Levenberg-Marquardt (LM) algorithm, with applications to key speaker verification. The incremental construction of deep multilayer perceptrons is based on effective LM learning of the receptive fields of perceptrons within the same hidden layer. The predictors are acoustic features, namely mel-scale frequency cepstral coefficients (MFCCs). The desired targets represent the two alternative acoustic sources of the predictors: either the key speaker or the remaining reference speakers. Given paired predictors and targets, learning the first hidden layer aims to translate predictors into internal representations that are adaline (adaptive linear element) separable. A deeper hidden layer is allocated whenever unsolvable constraints are discovered during the execution phase; these constraints are recruited for deep learning. Deep learning preserves the receptive fields of previous layers and trains only the receptive fields of the topmost perceptrons, significantly reducing the number of adaptable interconnections optimized by the LM algorithm. The topmost perceptrons can also serve as a junction for integrating two different neural networks. Each time, newly allocated perceptrons are connected to the topmost layer and their receptive fields are optimized by the LM algorithm subject to both novel and inherited constraints. The incremental construction process is applied extensively to text-dependent key speaker verification.
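The LM update the abstract relies on is a damped Gauss-Newton step, Δw = (JᵀJ + λI)⁻¹ Jᵀe, where J is the Jacobian of the residuals e and λ is adapted up or down depending on whether a step reduces the error. A minimal single-parameter sketch on hypothetical toy data (not the thesis code, which trains perceptron receptive fields) illustrates the damping schedule:

```python
# Minimal Levenberg-Marquardt sketch: fit y ~ exp(a * x) to toy data.
# For one parameter, (J'J + lambda*I)^-1 J'e reduces to a scalar division.
import math

def lm_fit(xs, ys, a=0.0, lam=1e-3, iters=50):
    for _ in range(iters):
        # Residuals e_i = y_i - f(x_i; a) and Jacobian J_i = df/da = x_i * exp(a * x_i)
        e = [y - math.exp(a * x) for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        JtJ = sum(j * j for j in J)
        Jte = sum(j * r for j, r in zip(J, e))
        step = Jte / (JtJ + lam)          # damped Gauss-Newton step
        new_a = a + step
        old_err = sum(r * r for r in e)
        new_err = sum((y - math.exp(new_a * x)) ** 2 for x, y in zip(xs, ys))
        if new_err < old_err:
            a, lam = new_a, lam * 0.5     # accept step, relax damping
        else:
            lam *= 2.0                    # reject step, increase damping
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]      # synthetic data generated with a = 0.7
a_hat = lm_fit(xs, ys)
```

Large λ makes the update behave like small-step gradient descent; small λ recovers the fast Gauss-Newton step near a solution, which is why the thesis can afford to optimize only the topmost layer's receptive fields with LM.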
author2: Jiann-Ming Wu
author_facet: Jiann-Ming Wu; Bo-Xiang Chiou 邱柏翔
author: Bo-Xiang Chiou 邱柏翔
spellingShingle: Bo-Xiang Chiou 邱柏翔 Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
author_sort: Bo-Xiang Chiou
title: Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
title_short: Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
title_full: Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
title_fullStr: Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
title_full_unstemmed: Deep perceptrons and supervised Levenberg-Marquardt learning with applications to key speaker verification
title_sort: deep perceptrons and supervised levenberg-marquardt learning with applications to key speaker verification
publishDate: 2016
url: http://ndltd.ncl.edu.tw/handle/23436176079310280180
work_keys_str_mv: AT boxiangchiou deepperceptronsandsupervisedlevenbergmarquardtlearningwithapplicationstokeyspeakerverification; AT qiūbǎixiáng deepperceptronsandsupervisedlevenbergmarquardtlearningwithapplicationstokeyspeakerverification; AT boxiangchiou yǐshēndùjíjiāndūshìlevenbergmarquardtxuéxífǎjiěyǔzhěbiànshíwèntí; AT qiūbǎixiáng yǐshēndùjíjiāndūshìlevenbergmarquardtxuéxífǎjiěyǔzhěbiànshíwèntí
_version_: 1718526286419197952