Statistical Approaches for 2D Character Animation

Bibliographic Details
Main Author: Chou, Yun-Feng (周芸鋒)
Other Authors: Shih, Zen-Chung
Format: Others
Language: en_US
Published: 2010
Online Access: http://ndltd.ncl.edu.tw/handle/19328675079997665600
Description
Summary: Ph.D. === National Chiao Tung University === Institute of Computer Science and Engineering === 98 === Traditionally, the production of 2D animation is a labor-intensive artisan process of building up sequences of hand-drawn images which, when shown one after another at a fixed rate, create the illusion of movement. Most of the work, and hence time, is spent on drawing, inking, and coloring the individual characters for each frame. Instead of generating animation by hand, we introduce a novel method that enhances still pictures and makes characters move in convincing ways. The proposed method is based on statistical analysis and inference and minimizes user intervention. We adopt nonparametric regression to efficiently analyze the displacements of pre-sampled data from characters in still pictures and use them to generate 2D character animation directly. Furthermore, 2D character animation is regarded as a 3D transformation problem consisting of a 2D spatial displacement and a 1D shift in time. Hence, we focus on the temporal relationship among different poses of the same character in these still pictures. Time series analysis is applied to model the character's movement and forecast a sequence of suitable limb movements.

In this dissertation, 2D character animation involves novel view generation, expressive talking face simulation, and limb movement synthesis. For characters in still pictures, we focus on nonparametric regression to generate a novel view and an expressive facial animation synchronized with the character's input speech. Kernel regression with elliptic radial basis functions (ERBFs) is proposed to describe and deform the shape of the character in image space; this novel parametric representation expresses observations of the shape on the unit ellipse. To preserve patterns within the deformed shape, locally weighted regression (LOESS) is applied to fit the details with local control.

Furthermore, time series analysis is used to model the limb movement of a character and represent its motion trajectory. A character's motion can be described by a series of non-continuous poses taken from a sequence of contiguous frames. From these poses, we investigate a nonparametric Bayesian approach to construct a time series model representing the character's motion trajectory, from which a motion sequence can then be synthesized.

Finally, we investigate how to adopt the proposed statistical approaches to animate passive elements, whose natural movements respond to natural forces, such as trees swaying and water rippling. Given a picture of a tree, we make it sway; given a picture of a pond, we make it ripple. These solutions animate photographs or paintings effectively.

Experimental results show that our method effectively simulates plausible movements for 2D character animation, and that the estimated motion trajectory matches the given still frames well. In comparison with previous approaches, our method synthesizes smooth animations while minimizing unnatural distortion and offering greater controllability. Moreover, the proposed method is especially suitable for intelligent multimedia applications in virtual human generation. We believe that the provided solutions are easy to use and enable much faster animation production.
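
The abstract names kernel regression with elliptic radial basis functions (ERBFs) for deforming a character's shape in image space, but does not spell out the basis form. The Python sketch below is only illustrative: it assumes an anisotropic (elliptic) Gaussian basis parameterized by a per-sample 2x2 covariance, and the function names (elliptic_rbf, kernel_regression_displacement) and toy control points are hypothetical rather than taken from the dissertation.

    import numpy as np

    def elliptic_rbf(x, center, cov, eps=1e-9):
        """Anisotropic (elliptic) Gaussian basis exp(-0.5 * d^2),
        where d is the Mahalanobis distance induced by `cov`."""
        diff = x - center
        inv_cov = np.linalg.inv(cov + eps * np.eye(2))
        return np.exp(-0.5 * (diff @ inv_cov @ diff))

    def kernel_regression_displacement(query_pts, sample_pts, sample_disp, covs):
        """Nadaraya-Watson style kernel regression of 2D displacements:
        each query point's displacement is a weighted average of the
        pre-sampled displacements, weighted by elliptic RBFs."""
        out = np.zeros_like(query_pts, dtype=float)
        for i, q in enumerate(query_pts):
            w = np.array([elliptic_rbf(q, c, S) for c, S in zip(sample_pts, covs)])
            w_sum = w.sum()
            if w_sum > 1e-12:
                out[i] = (w[:, None] * sample_disp).sum(axis=0) / w_sum
        return out

    # Toy usage: warp a grid of pixel positions with two sampled displacements.
    samples = np.array([[30.0, 40.0], [80.0, 60.0]])
    disps   = np.array([[ 5.0, -2.0], [-3.0,  4.0]])
    covs    = [np.diag([200.0, 80.0]), np.diag([120.0, 300.0])]  # elliptical supports
    grid    = np.stack(np.meshgrid(np.arange(0, 100, 10),
                                   np.arange(0, 100, 10)), -1).reshape(-1, 2).astype(float)
    warped  = grid + kernel_regression_displacement(grid, samples, disps, covs)

In this reading, each pre-sampled point carries an observed displacement, and a query pixel's displacement is a kernel-weighted average of those samples; this is one standard nonparametric-regression formulation, not necessarily the exact one used in the dissertation.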
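The abstract also describes a nonparametric Bayesian time series model that forecasts limb motion from a few key poses, without giving its construction. As one plausible stand-in, the sketch below fits a Gaussian process over frame indices to a single pose parameter; the kernel choice, the helper names (rbf_kernel, gp_forecast), and the toy key-frame data are assumptions for illustration only.

    import numpy as np

    def rbf_kernel(t1, t2, length=3.0, var=1.0):
        """Squared-exponential kernel over frame indices (time)."""
        d = t1[:, None] - t2[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)

    def gp_forecast(t_obs, y_obs, t_new, noise=1e-3):
        """Gaussian-process posterior mean and std. dev. of a 1D pose
        parameter (e.g., one joint angle) at new time steps, given
        sparse key poses observed at t_obs."""
        K     = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
        Ks    = rbf_kernel(t_new, t_obs)
        Kss   = rbf_kernel(t_new, t_new)
        alpha = np.linalg.solve(K, y_obs)
        mean  = Ks @ alpha
        cov   = Kss - Ks @ np.linalg.solve(K, Ks.T)
        return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

    # Toy usage: three key poses of an elbow angle, densely resampled in time.
    t_key  = np.array([0.0, 8.0, 16.0])    # key-frame indices
    angles = np.array([0.2, 1.1, 0.4])     # observed joint angle (radians)
    t_all  = np.linspace(0.0, 20.0, 41)    # frames to synthesize, incl. forecast
    mean, std = gp_forecast(t_key, angles, t_all)

A Gaussian process is a common nonparametric Bayesian choice for smooth trajectories and exposes uncertainty at forecast frames, but it is offered here only as an interpretation of the model class the abstract mentions.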