DV-LOAM: Direct Visual LiDAR Odometry and Mapping

Self-driving cars have developed rapidly in the past few years, and Simultaneous Localization and Mapping (SLAM) is considered one of their fundamental capabilities. In this article, we propose a direct visual-LiDAR fusion SLAM framework that consists of three modules. First, a two-stage direct visual odometry module, consisting of a frame-to-frame tracking step and an improved sliding-window-based refinement step, is proposed to estimate an accurate camera pose while maintaining efficiency. Second, every time a keyframe is generated, a LiDAR mapping module that accounts for dynamic objects is used to refine the keyframe pose, achieving higher positioning accuracy and better robustness. Finally, a Parallel Global and Local Search Loop Closure Detection (PGLS-LCD) module that combines visual Bag of Words (BoW) and LiDAR-Iris features is applied for place recognition, correcting accumulated drift and maintaining a globally consistent map. We conducted extensive experiments on public datasets and our own mobile robot dataset to verify the effectiveness of each module of the framework. Experimental results show that the proposed algorithm achieves more accurate pose estimation than state-of-the-art methods.
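The record itself contains no formulas or code. As background only: direct visual odometry of the kind described above typically estimates the relative camera pose T_{k,k-1} in SE(3) by minimizing a photometric error over image points with known depth, which in a visual-LiDAR system is plausibly supplied by projecting LiDAR points into the image. A standard formulation, not necessarily DV-LOAM's exact cost, is

    E(T_{k,k-1}) = \sum_i \rho\left( \left[ I_k\left( \pi\left( T_{k,k-1}\, \pi^{-1}(p_i, d_i) \right) \right) - I_{k-1}(p_i) \right]^2 \right),

where \pi is the camera projection, p_i an image point with depth d_i, I_k the intensity image of frame k, and \rho a robust kernel such as Huber.

The three-module architecture described in the abstract can likewise be sketched as follows. This is a minimal illustrative outline under the assumptions stated in the abstract; every class and method name is a hypothetical placeholder, not an interface from the paper.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        image: object        # camera intensity image
        cloud: object        # synchronized LiDAR sweep
        is_keyframe: bool = False

    class DvLoamPipeline:
        """Sketch of the three modules named in the abstract (hypothetical API)."""

        def __init__(self, odometry, mapper, loop_closer):
            self.odometry = odometry        # Module 1: two-stage direct visual odometry
            self.mapper = mapper            # Module 2: dynamic-object-aware LiDAR mapping
            self.loop_closer = loop_closer  # Module 3: PGLS-LCD place recognition

        def process(self, frame: Frame):
            # Stage 1: coarse frame-to-frame photometric tracking.
            pose = self.odometry.track_frame_to_frame(frame)
            # Stage 2: refine the pose over a sliding window of recent frames.
            pose = self.odometry.refine_sliding_window(frame, pose)
            if frame.is_keyframe:
                # Refine the keyframe pose against the LiDAR map while
                # accounting for dynamic objects.
                pose = self.mapper.refine_keyframe(frame, pose)
                # Combine visual BoW and LiDAR-Iris features for place
                # recognition; a confirmed loop corrects accumulated drift.
                if self.loop_closer.detect(frame):
                    self.loop_closer.correct_drift()
            return pose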

Bibliographic Details
Main Authors: Wei Wang, Jun Liu, Chenjie Wang, Bin Luo, Cheng Zhang
Affiliation: State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China (all authors)
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Remote Sensing, vol. 13, no. 16, art. 3340
ISSN: 2072-4292
DOI: 10.3390/rs13163340
Subjects: Simultaneous Localization and Mapping; direct visual-LiDAR odometry; loop closure detection
Online Access: https://www.mdpi.com/2072-4292/13/16/3340