Wearable Depth Camera: Monocular Depth Estimation via Sparse Optimization Under Weak Supervision


Bibliographic Details
Main Authors: Li He, Chuangbin Chen, Tao Zhang, Haifei Zhu, Shaohua Wan
Format: Article
Language: English
Published: IEEE 2018-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8413080/
Description
Summary: Depth estimation is essential for many human-object interaction tasks. Despite their advantages, traditional depth sensors, such as the Kinect and other depth cameras, are often not wearable-friendly due to critical drawbacks such as excessive size and weight. A monocular camera, on the other hand, offers a promising solution that places little burden on the user and has attracted increasing attention in the literature. In this paper, we propose a depth estimation method using a monocular camera. Our main idea is a weakly supervised monocular depth estimation model based on left-right consistency. To learn accurate depth estimates, during training we employ LiDAR data, which are generated by a laser scanner with very high depth accuracy, to semi-supervise the learning scheme. We build our network on ResNet and propose a new penalty function that incorporates a LiDAR depth loss during training. Compared with several state-of-the-art monocular depth estimators, our proposed method obtains the highest depth accuracy.
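The abstract describes a penalty function combining a left-right consistency term with a depth loss on sparse LiDAR points. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that general idea: the function name, the L1 form of both terms, and the weighting parameter `lam` are all assumptions for illustration, not the authors' definition.

```python
import numpy as np

def combined_depth_loss(d_left, d_right_warped, d_lidar, lidar_mask, lam=0.5):
    """Hypothetical sketch of a weakly supervised depth penalty:
    a left-right consistency term plus an L1 loss against sparse
    LiDAR depth, applied only where LiDAR returns exist."""
    # Left-right consistency: the predicted left-view depth should
    # agree with the right-view depth warped into the left frame.
    lr_term = np.mean(np.abs(d_left - d_right_warped))
    # Sparse LiDAR supervision: penalize only pixels with a valid
    # LiDAR measurement (lidar_mask == 1); LiDAR is sparse, so most
    # pixels contribute nothing to this term.
    valid = lidar_mask.astype(bool)
    lidar_term = np.mean(np.abs(d_left - d_lidar)[valid]) if valid.any() else 0.0
    return lr_term + lam * lidar_term

# Toy example: perfect left-right agreement, two LiDAR points each
# off by 0.5, with lam = 0.5.
d_left = np.array([[1.0, 2.0], [3.0, 4.0]])
d_right_warped = d_left.copy()
d_lidar = np.array([[1.5, 0.0], [0.0, 4.5]])
lidar_mask = np.array([[1, 0], [0, 1]])
loss = combined_depth_loss(d_left, d_right_warped, d_lidar, lidar_mask)
print(loss)  # 0.0 + 0.5 * 0.5 = 0.25
```

The mask-based formulation reflects why sparse LiDAR can only semi-supervise training: the consistency term covers every pixel, while the LiDAR term anchors the absolute scale at the few pixels where ground truth exists.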
ISSN:2169-3536