3D-ReConstnet: A Single-View 3D-Object Point Cloud Reconstruction Network


Bibliographic Details
Main Authors: Bin Li, Yonghan Zhang, Bo Zhao, Hongyao Shao
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9086481/
Description
Summary: Object 3D reconstruction from a single-view image is an ill-posed problem. Inferring the self-occluded part of an object makes 3D reconstruction a challenging and ambiguous task. In this paper, we propose a novel neural network for generating a 3D-object point cloud model from a single-view image. The proposed network, named 3D-ReConstnet, is an end-to-end reconstruction network. 3D-ReConstnet uses a residual network to extract the features of a 2D input image and obtain a feature vector. To deal with the uncertainty of the self-occluded part of an object, 3D-ReConstnet predicts the point cloud from a Gaussian probability distribution learned from the feature vector. 3D-ReConstnet can generate a determined 3D output for a 2D image with sufficient information, and it can also generate semantically different 3D reconstructions for the self-occluded or ambiguous parts of an object. We evaluated 3D-ReConstnet on the ShapeNet and Pix3D datasets and obtained improved results.
ISSN: 2169-3536
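The pipeline the abstract describes (image features from a residual encoder → a learned Gaussian distribution → a decoded point cloud) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, layer shapes, and random weights are all assumptions standing in for a trained ResNet encoder and learned decoder parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 512    # size of the image feature vector (assumed)
LATENT_DIM = 128  # size of the Gaussian latent (assumed)
NUM_POINTS = 1024 # points in the reconstructed cloud (assumed)

# Hypothetical linear maps; in the real network these are learned end to end.
W_mu = rng.standard_normal((FEAT_DIM, LATENT_DIM)) * 0.01
W_logvar = rng.standard_normal((FEAT_DIM, LATENT_DIM)) * 0.01
W_dec = rng.standard_normal((LATENT_DIM, NUM_POINTS * 3)) * 0.01

def reconstruct(feature, deterministic=False):
    """Map an image feature vector to a (NUM_POINTS, 3) point cloud.

    Sampling z from N(mu, sigma^2) lets the model emit different plausible
    shapes for ambiguous (self-occluded) inputs; using mu directly gives a
    single determined reconstruction for an unambiguous image.
    """
    mu = feature @ W_mu
    logvar = feature @ W_logvar
    if deterministic:
        z = mu
    else:
        # Reparameterization: mu + sigma * eps, eps ~ N(0, I)
        z = mu + np.exp(0.5 * logvar) * rng.standard_normal(LATENT_DIM)
    return (z @ W_dec).reshape(NUM_POINTS, 3)

feature = rng.standard_normal(FEAT_DIM)  # stand-in for a ResNet image embedding
cloud = reconstruct(feature)
print(cloud.shape)  # (1024, 3)
```

Calling `reconstruct` repeatedly with `deterministic=False` yields distinct point clouds for the same feature vector, which mirrors the paper's claim of semantically different reconstructions for occluded parts, while `deterministic=True` always returns the same shape.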