Speech Driven Lip Animation
Master's === National Dong Hwa University === Department of Computer Science and Information Engineering === 98 === Speech-driven lip animation has recently seen a growing range of applications, such as virtual news reporters and language-learning systems. This thesis proposes a low-cost speech-driven lip animation system. To operate the system, a user inputs voice vi...
Main Authors: | Ting Cheng, 鄭婷 |
---|---|
Other Authors: | Cheng-Chin Chiang |
Format: | Others |
Language: | zh-TW |
Published: | 2010 |
Online Access: | http://ndltd.ncl.edu.tw/handle/15860475863974329712 |
id | ndltd-TW-098NDHU5392066 |
---|---|
record_format | oai_dc |
spelling | ndltd-TW-098NDHU53920662016-04-22T04:23:11Z http://ndltd.ncl.edu.tw/handle/15860475863974329712 Speech Driven Lip Animation 語音驅動人臉唇型動畫合成 Ting Cheng 鄭婷 Master's National Dong Hwa University Department of Computer Science and Information Engineering 98 Speech-driven lip animation has recently seen a growing range of applications, such as virtual news reporters and language-learning systems. This thesis proposes a low-cost speech-driven lip animation system. To operate the system, a user inputs voice via a low-cost PC microphone, or text. The system then recognizes the input and synthesizes the corresponding lip shapes on the images. The kernel technique exploited in the system is the hidden Markov model (HMM), an effective solution for speech recognition. After recognizing each voice segment with the HMM, we obtain the lip-codes of the words. From the lip-codes, we synthesize the interpolated images and thereby complete the animation. The experimental results show that the proposed method can indeed synthesize lip animation. The synthesized animations are vivid and realistic, demonstrating the system's high potential in practical applications. Cheng-Chin Chiang 江政欽 2010/07/ degree thesis ; thesis 77 zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description |
Master's === National Dong Hwa University === Department of Computer Science and Information Engineering === 98 === Speech-driven lip animation has recently seen a growing range of applications, such as virtual news reporters and language-learning systems. This thesis proposes a low-cost speech-driven lip animation system. To operate the system, a user inputs voice via a low-cost PC microphone, or text. The system then recognizes the input and synthesizes the corresponding lip shapes on the images.
The kernel technique exploited in the system is the hidden Markov model (HMM), an effective solution for speech recognition. After recognizing each voice segment with the HMM, we obtain the lip-codes of the words. From the lip-codes, we synthesize the interpolated images and thereby complete the animation.
The experimental results show that the proposed method can indeed synthesize lip animation. The synthesized animations are vivid and realistic, demonstrating the system's high potential in practical applications.
|
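The abstract describes mapping recognized words to lip-codes (keyframe lip shapes) and synthesizing interpolated in-between images to form the animation. The following is a minimal sketch of that interpolation step only; the function name, landmark representation, and data are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch of lip-keyframe interpolation (not the thesis's code).
# A lip keyframe is modeled here as a list of (x, y) landmark coordinates;
# frames between two keyframes are produced by linear interpolation.

def interpolate_lip_frames(start, end, n_frames):
    """Return n_frames intermediate lip frames between two keyframes.

    start, end: lists of (x, y) landmark tuples of equal length.
    The returned frames exclude the two endpoint keyframes.
    """
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # interpolation weight in (0, 1)
        frame = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                 for (x0, y0), (x1, y1) in zip(start, end)]
        frames.append(frame)
    return frames

# Toy example: two-landmark "closed" and "open" mouth keyframes.
closed = [(0.0, 0.0), (1.0, 0.0)]
opened = [(0.0, -0.5), (1.0, 0.5)]
mid = interpolate_lip_frames(closed, opened, 1)[0]
# → [(0.0, -0.25), (1.0, 0.25)]
```

In the full pipeline, each HMM-recognized word would select the target keyframe (its lip-code), and the interpolated frames would be rendered onto the face images to produce the animation.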
author2 | Cheng-Chin Chiang |
author_facet | Cheng-Chin Chiang Ting Cheng 鄭婷 |
author | Ting Cheng 鄭婷 |
spellingShingle | Ting Cheng 鄭婷 Speech Driven Lip Animation |
author_sort | Ting Cheng |
title | Speech Driven Lip Animation |
title_short | Speech Driven Lip Animation |
title_full | Speech Driven Lip Animation |
title_fullStr | Speech Driven Lip Animation |
title_full_unstemmed | Speech Driven Lip Animation |
title_sort | speech driven lip animation |
publishDate | 2010 |
url | http://ndltd.ncl.edu.tw/handle/15860475863974329712 |
work_keys_str_mv | AT tingcheng speechdrivenlipanimation AT zhèngtíng speechdrivenlipanimation AT tingcheng yǔyīnqūdòngrénliǎnchúnxíngdònghuàhéchéng AT zhèngtíng yǔyīnqūdòngrénliǎnchúnxíngdònghuàhéchéng |
_version_ | 1718230372479664128 |