Depth Map Refinement for Immersive Video

In this article, we propose a depth map refinement method that increases the quality of immersive video. The proposal substantially enhances the inter-view consistency of depth maps (estimated or acquired by any method), which is crucial for achieving the required fidelity of the virtual view synthesis process. In...

Full description

Bibliographic Details
Main Authors: Dawid Mieloch, Adrian Dziembowski, Marek Domanski
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Subjects: Depth map refinement; immersive video; virtual navigation; multiview stereo; inter-view consistency
Online Access: https://ieeexplore.ieee.org/document/9319165/
id doaj-06c89e6fa76243bfa96d54c19d032683
record_format Article
spelling doaj-06c89e6fa76243bfa96d54c19d032683 2021-03-30T15:03:18Z eng IEEE
IEEE Access, ISSN 2169-3536, 2021-01-01, vol. 9, pp. 10778-10788, doi: 10.1109/ACCESS.2021.3050554, article no. 9319165
Depth Map Refinement for Immersive Video
Dawid Mieloch (https://orcid.org/0000-0003-0709-812X), Adrian Dziembowski, Marek Domanski (https://orcid.org/0000-0002-9381-0293); Institute of Multimedia Telecommunications, Poznań University of Technology, Poznań, Poland
In this article, we propose a depth map refinement method that increases the quality of immersive video. The proposal substantially enhances the inter-view consistency of depth maps (estimated or acquired by any method), which is crucial for achieving the required fidelity of the virtual view synthesis process. In the described method, only information from depth maps is used, as the use of texture can introduce errors in the refinement, mostly due to inter-view color inconsistencies and noise. To evaluate the performance of the proposal and compare it with the state of the art, three experiments were conducted. To test the influence of the refinement on the encoding of immersive video, four sets of depth maps (original, refined with the synthesis-based refinement, with a bilateral filter, and with the proposal) were encoded with the MPEG Immersive Video (MIV) encoder. In the second experiment, to provide a direct evaluation of the accuracy of depth maps, a comparison on the Middlebury database was performed. In the third experiment, the temporal consistency of depth maps was assessed by measuring the efficiency of encoding of the virtual views. The experiments showed both a significant increase in virtual view synthesis quality in immersive video applications and higher similarity to ground truth after the refinement of estimated depth maps.
The usefulness of the proposal was confirmed by the experts of the ISO/IEC MPEG group for immersive video, and the method became the MPEG Reference Software for depth refinement. The implementation of the method is publicly available for other researchers.
https://ieeexplore.ieee.org/document/9319165/
Keywords: depth map refinement; immersive video; virtual navigation; multiview stereo; inter-view consistency
collection DOAJ
language English
format Article
sources DOAJ
author Dawid Mieloch
Adrian Dziembowski
Marek Domanski
spellingShingle Dawid Mieloch
Adrian Dziembowski
Marek Domanski
Depth Map Refinement for Immersive Video
IEEE Access
Depth map refinement
immersive video
virtual navigation
multiview stereo
inter-view consistency
author_facet Dawid Mieloch
Adrian Dziembowski
Marek Domanski
author_sort Dawid Mieloch
title Depth Map Refinement for Immersive Video
title_short Depth Map Refinement for Immersive Video
title_full Depth Map Refinement for Immersive Video
title_fullStr Depth Map Refinement for Immersive Video
title_full_unstemmed Depth Map Refinement for Immersive Video
title_sort depth map refinement for immersive video
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2021-01-01
description In this article, we propose a depth map refinement method that increases the quality of immersive video. The proposal substantially enhances the inter-view consistency of depth maps (estimated or acquired by any method), which is crucial for achieving the required fidelity of the virtual view synthesis process. In the described method, only information from depth maps is used, as the use of texture can introduce errors in the refinement, mostly due to inter-view color inconsistencies and noise. To evaluate the performance of the proposal and compare it with the state of the art, three experiments were conducted. To test the influence of the refinement on the encoding of immersive video, four sets of depth maps (original, refined with the synthesis-based refinement, with a bilateral filter, and with the proposal) were encoded with the MPEG Immersive Video (MIV) encoder. In the second experiment, to provide a direct evaluation of the accuracy of depth maps, a comparison on the Middlebury database was performed. In the third experiment, the temporal consistency of depth maps was assessed by measuring the efficiency of encoding of the virtual views. The experiments showed both a significant increase in virtual view synthesis quality in immersive video applications and higher similarity to ground truth after the refinement of estimated depth maps. The usefulness of the proposal was confirmed by the experts of the ISO/IEC MPEG group for immersive video, and the method became the MPEG Reference Software for depth refinement. The implementation of the method is publicly available for other researchers.
topic Depth map refinement
immersive video
virtual navigation
multiview stereo
inter-view consistency
url https://ieeexplore.ieee.org/document/9319165/
work_keys_str_mv AT dawidmieloch depthmaprefinementforimmersivevideo
AT adriandziembowski depthmaprefinementforimmersivevideo
AT marekdomanski depthmaprefinementforimmersivevideo
_version_ 1724180058186186752
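
The abstract describes a depth-only, inter-view consistency criterion: a depth value in one view is trusted only if its reprojection into another view agrees with that view's depth map. As a rough illustration of that idea (not the authors' actual algorithm, which is specified in the linked paper), a minimal cross-view consistency check can be sketched under strong simplifying assumptions: rectified pinhole cameras with a purely horizontal baseline. The function name, the baseline model, and the threshold `tau` are all hypothetical.

```python
import numpy as np

def cross_view_consistency(depth_a, depth_b, f, baseline, tau=0.1):
    """Flag pixels of view A whose depth, reprojected into view B,
    agrees with view B's depth map to within tau.

    Illustrative sketch only. Assumes rectified pinhole cameras with a
    horizontal baseline, so a pixel (x, y) at depth z in view A lands
    at (x - f * baseline / z, y) in view B.
    """
    h, w = depth_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    disp = f * baseline / depth_a            # disparity induced by depth
    xb = np.round(xs - disp).astype(int)     # corresponding column in view B
    valid = (xb >= 0) & (xb < w)             # reprojection stays inside view B
    zb = np.where(valid, depth_b[ys, np.clip(xb, 0, w - 1)], np.inf)
    consistent = np.zeros((h, w), dtype=bool)
    consistent[valid] = (np.abs(depth_a - zb) < tau)[valid]
    return consistent
```

In a refinement pipeline of this flavor, pixels flagged as inconsistent would then be replaced, e.g. by depth values agreed upon by the other views; the paper should be consulted for how the actual method selects the corrected values.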