Transferable Speech-Driven Lips Synthesis

Bibliographic Details
Main Authors: Hong-Dien Chen, 陳宏典
Other Authors: 莊永裕
Format: Others
Language: en_US
Published: 2006
Online Access: http://ndltd.ncl.edu.tw/handle/26572252526368269372
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === 94 === Image-based videorealistic speech animation achieves a high degree of visual realism, so it could potentially be used to create virtual teachers for language learning, digital characters in movies, or even users' stand-ins in very low bit-rate video conferencing. However, it comes at the cost of collecting a large video corpus from the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus of a specific person recorded under a controlled setup may not be easy to obtain. Hence, we adopt a simple method that allows us to transfer the original animation model to a novel person using only a few different lip images.
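The record does not describe the transfer procedure itself. Purely as an illustration of what "transferring an animation model with only a few lip images" could mean, the sketch below replaces each mouth frame of a source animation with the most similar image from a small set of target-person lip photos, using nearest-neighbor matching in normalized pixel space. The function names, data layout, and matching strategy are all assumptions for this sketch and are not taken from the thesis.

```python
import numpy as np

def retarget_animation(source_frames, target_lip_images):
    """Hypothetical sketch: re-render a source mouth animation with a
    new person's lip appearance by nearest-neighbor matching.

    source_frames:     (N, H, W) grayscale mouth frames from the original
                       (source) animation model.
    target_lip_images: (K, H, W) lip images of the novel person, K << N.
    Returns an (N, H, W) array of target-person lip images, one per
    source frame.
    """
    # Flatten each image into a single feature vector.
    src = source_frames.reshape(len(source_frames), -1).astype(np.float32)
    tgt = target_lip_images.reshape(len(target_lip_images), -1).astype(np.float32)

    # Normalize per image to reduce the effect of different lighting
    # between the two recording conditions.
    src = (src - src.mean(axis=1, keepdims=True)) / (src.std(axis=1, keepdims=True) + 1e-6)
    tgt_n = (tgt - tgt.mean(axis=1, keepdims=True)) / (tgt.std(axis=1, keepdims=True) + 1e-6)

    # For every source frame, pick the closest target lip image
    # (Euclidean distance in normalized pixel space).
    dists = ((src[:, None, :] - tgt_n[None, :, :]) ** 2).sum(axis=-1)
    nearest = dists.argmin(axis=1)

    return target_lip_images[nearest]
```

This is only a naive baseline for intuition; it ignores temporal smoothness and any appearance blending a real transfer method would need.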