Motion Estimation for Visual Coding and Signal Processing

Bibliographic Details
Main Authors: Ming-Jiun Wang, 王明俊
Other Authors: Gwo-Giun Lee
Format: Others
Language: en_US
Published: 2013
Online Access: http://ndltd.ncl.edu.tw/handle/98721918840460558648
Description
Summary: Ph.D. === National Cheng Kung University === Department of Electrical Engineering (Master's and Doctoral Program) === 101 === This dissertation presents novel motion estimation applicable to video coding and visual signal processing, based on an algorithm/architecture co-design methodology. Motion estimation plays a significant role in high-quality, real-time video applications and determines the cost of a video system-on-a-chip. The motion estimation algorithm is characterized by efficient spatio-temporal motion vector prediction, a modified one-at-a-time search, and multiple update paths motivated by optimization theory. By analyzing the algorithmic complexity in the early design stage, the introduced motion estimation locates a desirable design instance in the co-design space with an effective trade-off between performance and complexity, resulting in an efficient architecture that features internal caches for reference data. The implementation results of the introduced motion estimation not only surpass recently published designs and achieve performance comparable to full search in H.264/AVC video coding, but also exhibit ultralow complexity in terms of silicon area compared with other strategies. Applying the introduced motion estimation, with its true-motion characteristics, to video processing algorithms also yields outstanding designs. By tactically utilizing motion information for video content analysis, the introduced motion-adaptive and motion-compensated deinterlacing algorithms select an appropriate spectrum filter for a specific video scene, render better interpolation quality, and require a lower gate count than state-of-the-art designs. Moreover, the algorithm/architecture co-design methodology helps with exploring the design space, determining the processing granularity, and studying the commonality between different processing modes, resulting in a cost-efficient reconfigurable architecture. On the other hand, the introduced motion estimation facilitates the analysis of video scenes for 2D-to-3D video conversion. Signatures of the co-occurrence matrix of motion vectors classify video scenes with various motion-depth relations, which is important for estimating depth from the motion vectors of 2D video. Compared with other motion-based 2D-to-3D conversion algorithms, the introduced algorithm, which incorporates the introduced motion estimation, provides more reasonable depth for 3D view synthesis. Experimental results indicate that the introduced motion estimation can be widely applied to video coding and visual signal processing, achieving better qualitative and quantitative performance at lower cost.
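
To make the algorithmic ingredients named in the abstract more concrete, the sketch below illustrates block-matching motion estimation that starts from a spatio-temporal motion-vector predictor and refines it with a one-at-a-time search exploring several single-pixel update paths. It is a minimal illustration under assumed choices (16×16 blocks, SAD cost, median predictor); the names `predict_mv` and `ots_search` and all parameters are hypothetical and do not reproduce the dissertation's exact algorithm or its hardware-oriented refinements.

```python
# Minimal block-matching sketch: spatio-temporal MV prediction followed by a
# one-at-a-time search (OTS). Block size, cost function, and predictor are
# illustrative assumptions, not the dissertation's actual design.
import numpy as np

BLOCK = 16  # assumed macroblock size (H.264/AVC luma)

def sad(cur, ref, bx, by, mvx, mvy):
    """Sum of absolute differences between the current block at (bx, by) and
    the motion-displaced reference block; +inf if the candidate leaves the frame."""
    h, w = ref.shape
    x, y = bx + mvx, by + mvy
    if x < 0 or y < 0 or x + BLOCK > w or y + BLOCK > h:
        return np.inf
    c = cur[by:by + BLOCK, bx:bx + BLOCK].astype(np.int32)
    r = ref[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
    return int(np.abs(c - r).sum())

def predict_mv(spatial_mvs, temporal_mv):
    """Spatio-temporal prediction: component-wise median of the neighbouring
    spatial MVs and the co-located temporal MV (a common, assumed choice)."""
    cands = list(spatial_mvs) + [temporal_mv]
    xs = sorted(v[0] for v in cands)
    ys = sorted(v[1] for v in cands)
    return xs[len(xs) // 2], ys[len(ys) // 2]

def ots_search(cur, ref, bx, by, start_mv, max_steps=32):
    """One-at-a-time search: from the predicted MV, repeatedly take whichever
    single-pixel move along the four axis directions lowers the SAD; stop when
    no single-step move improves the cost."""
    best_mv = start_mv
    best_cost = sad(cur, ref, bx, by, *best_mv)
    for _ in range(max_steps):
        moved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # candidate update paths
            cand = (best_mv[0] + dx, best_mv[1] + dy)
            cost = sad(cur, ref, bx, by, *cand)
            if cost < best_cost:
                best_mv, best_cost, moved = cand, cost, True
        if not moved:
            break
    return best_mv, best_cost
```

Starting the search at a predicted vector rather than at (0, 0) is what keeps the per-block candidate count low compared with full search, which is the general trade-off the abstract refers to; the dissertation's actual predictor set and update schedule may differ.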
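For the deinterlacing thread, the following sketch shows the generic motion-adaptive idea of switching between a temporal and a spatial interpolator per pixel. The frame-difference motion measure, the fixed threshold, and the simple line-averaging filter are assumptions for illustration; the dissertation's motion-compensated design and spectrum-filter selection are not reproduced here.

```python
# Minimal motion-adaptive deinterlacing sketch: static pixels are filled by
# temporal averaging, moving pixels by vertical line averaging. Thresholds and
# filters are assumed values chosen only for illustration.
import numpy as np

def deinterlace(prev_frame, cur_frame, next_frame, missing_parity=1, thresh=12.0):
    """cur_frame holds the current field woven into a full-resolution frame;
    rows with parity `missing_parity` (0 = even, 1 = odd) are absent and are
    interpolated here. prev_frame and next_frame are temporally adjacent
    frames used for motion detection and temporal interpolation."""
    out = cur_frame.astype(np.float64).copy()
    prev_f = prev_frame.astype(np.float64)
    next_f = next_frame.astype(np.float64)
    h, _ = out.shape
    for y in range(missing_parity, h, 2):
        # Motion measure: absolute frame difference on the missing line.
        motion = np.abs(prev_f[y] - next_f[y])
        # Spatial candidate: average of the known lines above and below.
        up = out[y - 1] if y - 1 >= 0 else out[y + 1]
        down = out[y + 1] if y + 1 < h else out[y - 1]
        spatial = 0.5 * (up + down)
        # Temporal candidate: average of the co-located lines in time.
        temporal = 0.5 * (prev_f[y] + next_f[y])
        # Moving pixels take the spatial filter, static pixels the temporal one.
        out[y] = np.where(motion > thresh, spatial, temporal)
    return out
```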
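For the 2D-to-3D conversion thread, the sketch below shows one generic way a co-occurrence matrix of quantized motion-vector magnitudes can be turned into scene signatures. The quantization into 8 levels, the (1, 0) pixel offset, and the contrast/homogeneity/energy statistics are illustrative assumptions, not the dissertation's actual signature or classifier.

```python
# Minimal sketch of scene signatures from a co-occurrence matrix of
# motion-vector magnitudes, as an illustrative stand-in for motion-based
# scene classification in 2D-to-3D conversion.
import numpy as np

def mv_cooccurrence(mv_mag, levels=8, dx=1, dy=0):
    """Build a levels x levels co-occurrence matrix from a 2-D map of
    quantized motion-vector magnitudes, counting pairs offset by (dx, dy),
    and normalize it to joint probabilities."""
    q = np.clip(mv_mag.astype(int), 0, levels - 1)
    C = np.zeros((levels, levels), dtype=np.int64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / max(C.sum(), 1)

def scene_signature(C):
    """Texture-style statistics (contrast, homogeneity, energy) that can help
    separate scenes with different motion-depth relations, e.g. a camera pan
    over a static scene versus independently moving foreground objects."""
    i, j = np.indices(C.shape)
    contrast = float(((i - j) ** 2 * C).sum())
    homogeneity = float((C / (1.0 + np.abs(i - j))).sum())
    energy = float((C ** 2).sum())
    return contrast, homogeneity, energy
```

Such signatures only characterize how motion magnitudes co-occur spatially; mapping a classified scene to a depth map for view synthesis is a separate step that the abstract attributes to the dissertation's conversion algorithm and is not sketched here.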