Fast Monoscopic 2D Image Depth Estimation Method Based on Edge Defocus Cues


Bibliographic Details
Main Authors: Hsu, Ting-Yao, 許庭耀
Other Authors: Liu, Chih-Wei
Format: Others
Language: zh-TW
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/gzh6y6
Description
Summary: Master's === National Chiao Tung University === Institute of Biomedical Engineering === 105 === This paper presents a method for estimating a depth map from monoscopic video. To estimate depth from a single image, we use the defocus cue: an object that is not on the focal plane is blurred by the lens, and a blurred image is smoother than the original, so the blur level can be measured from spatial variation or from the energy in the high-frequency band. Most defocus-based depth estimation methods use the total high-frequency energy, but that energy varies with the luminance and color of the image, so the resulting estimate changes under different lighting and color conditions. We observe that in a blurred image the high-frequency energy does not disappear but shifts into the low-frequency band. Based on this observation, we propose estimating the depth map from the ratio of high- to low-frequency energy, and we optimize the computational complexity of the method. Using a disparity depth map as the ground truth, we compare it against the defocus depth map to compute PSNR and SSIM. Experimental results show that the proposed method is more stable across different colors and luminance levels: in the high-luminance case it improves the SSIM structure comparison by 30.97%, and in the low- and medium-luminance cases the other methods yield negative SSIM structure values while ours remains positive. In the speed comparison, the proposed method requires 65% less computation time.
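The core idea of the abstract, measuring local blur from the ratio of high- to low-frequency energy rather than from high-frequency energy alone, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the patch size, the frequency cutoff, and the FFT-based band split are assumptions introduced here for demonstration.

```python
import numpy as np

def blur_ratio_map(img, patch=16, cutoff=0.25):
    """Per-patch blur estimate from the ratio of high- to low-frequency
    energy. A lower ratio means a smoother (more defocused) patch.
    patch size and cutoff are illustrative choices, not the thesis's."""
    h, w = img.shape
    rows, cols = h // patch, w // patch
    ratio = np.zeros((rows, cols))
    fy = np.fft.fftfreq(patch)[:, None]       # vertical frequencies
    fx = np.fft.fftfreq(patch)[None, :]       # horizontal frequencies
    r = np.sqrt(fy**2 + fx**2)                # normalized radial frequency
    high = r >= cutoff                        # high-frequency band
    low = (r < cutoff) & (r > 0)              # low band, excluding DC
    for i in range(rows):
        for j in range(cols):
            block = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            spec = np.abs(np.fft.fft2(block))
            ratio[i, j] = spec[high].sum() / (spec[low].sum() + 1e-8)
    return ratio
```

Dividing by the low-band energy normalizes away the overall signal strength, which is why a ratio-based measure is less sensitive to luminance and color than the total high-frequency energy; mapping the inverted, normalized ratio to distance from the focal plane then gives a relative depth map.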