Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor
In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras are sensitive to illumination changes and occlusion, while LiDAR suffers from motion distortion, degenerate environments, and limited ranging distance. Therefore, fusi...
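The abstract is truncated before the method itself is described; purely as an illustrative sketch of the general idea of a camera-LiDAR fused descriptor (not the authors' actual pipeline), the snippet below concatenates an L2-normalized image global descriptor with an L2-normalized point-cloud descriptor and retrieves places by cosine similarity. The descriptor dimensions, fusion weights, and the existence of upstream feature extractors are assumptions.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a descriptor to unit length so both modalities contribute comparably."""
    return v / (np.linalg.norm(v) + eps)

def fuse_descriptors(img_desc, lidar_desc, w_img=0.5, w_lidar=0.5):
    """Hypothetical fusion: weighted concatenation of normalized per-modality descriptors.

    img_desc / lidar_desc are 1-D global descriptors from some image and
    point-cloud backbone (not shown here; an assumption for this sketch).
    """
    fused = np.concatenate([w_img * l2_normalize(img_desc),
                            w_lidar * l2_normalize(lidar_desc)])
    return l2_normalize(fused)

def retrieve(query, database, top_k=5):
    """Rank database places by cosine similarity to the query descriptor."""
    db = np.stack(database)        # (N, D), rows already unit-length
    scores = db @ query            # dot product = cosine similarity for unit vectors
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

# Toy usage with random stand-in descriptors.
rng = np.random.default_rng(0)
database = [fuse_descriptors(rng.normal(size=256), rng.normal(size=256)) for _ in range(100)]
query = fuse_descriptors(rng.normal(size=256), rng.normal(size=256))
indices, similarities = retrieve(query, database)
print(indices, similarities)
```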
| Main Authors: | Shaorong Xie, Chao Pan, Yaxin Peng, Ke Liu, Shihui Ying |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-05-01 |
| Series: | Sensors |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1424-8220/20/10/2870 |
Similar Items
- Learning to Fuse Multiscale Features for Visual Place Recognition
  by: Jun Mao, et al.
  Published: (2019-01-01)
- LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames
  by: Hao Fu, et al.
  Published: (2021-09-01)
- Targetless Camera-LiDAR Calibration in Unstructured Environments
  by: Miguel Angel Munoz-Banon, et al.
  Published: (2020-01-01)
- Enet-CRF-Lidar: Lidar and Camera Fusion for Multi-Scale Object Recognition
  by: Qitian Deng, et al.
  Published: (2019-01-01)
- Target Fusion Detection of LiDAR and Camera Based on the Improved YOLO Algorithm
  by: Jian Han, et al.
  Published: (2018-10-01)