Sparse Point Cloud Registration Network with Semantic Supervision in Wilderness Scenes

Bibliographic Details
Published in: Elektronika ir Elektrotechnika
Main Authors: Zhichao Zhang, Feng Lu, Youchun Xu, Jinsheng Chen, Yulin Ma
Format: Article
Language: English
Published: Kaunas University of Technology, 2024-03-01
Subjects:
Online Access: https://eejournal.ktu.lt/index.php/elt/article/view/35996
Description
Summary: The registration of laser point clouds under the complex conditions of wilderness scenes is an important topic in autonomous vehicle navigation research. It serves as the foundation for solving problems such as environment reconstruction, map construction, navigation and positioning, and pose estimation during the motion of autonomous vehicles using laser radar sensors. Because of sparse structural features, uneven point cloud density, and high noise levels in wilderness scenes, achieving reliable and accurate point cloud registration is challenging. In this paper, we propose a semantic-supervised sparse point cloud registration network (S3PCRNet) that aims to achieve effective registration of laser point clouds in large-scale wilderness scenes. Firstly, a local feature aggregation module is designed to extract the local structural features of the point cloud. Then, a randomly grouped self-attention mechanism based on rotation position encoding is proposed to learn the global features of the point cloud. A semantic information weight matrix is calculated to filter out negligible points. Subsequently, a semantic fusion feature module is used to find reliable correspondences between the point clouds. Finally, the proposed method is trained and evaluated on both the RELLIS-3D dataset and a self-built Off-road-3D dataset.
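The randomly grouped self-attention mentioned in the summary can be illustrated with a minimal toy sketch. This is not the authors' implementation: the function name, group size, and feature dimensions below are hypothetical, and the rotation position encoding used in the paper is omitted. The sketch only shows the core idea of restricting softmax attention to small random groups of points, which reduces the quadratic cost of full self-attention over a large point cloud.

```python
import numpy as np

def grouped_self_attention(feats, group_size=4, seed=0):
    """Toy sketch: shuffle points, split them into fixed-size random
    groups, and compute scaled dot-product self-attention within each
    group only (hypothetical helper, not the paper's S3PCRNet code)."""
    rng = np.random.default_rng(seed)
    n, d = feats.shape
    perm = rng.permutation(n)                 # random assignment to groups
    out = np.empty_like(feats)
    for start in range(0, n, group_size):
        idx = perm[start:start + group_size]  # indices of one group
        x = feats[idx]                        # (g, d) group features
        scores = x @ x.T / np.sqrt(d)         # scaled dot-product scores
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)     # row-wise softmax weights
        out[idx] = w @ x                      # attended group features
    return out

pts = np.random.default_rng(1).normal(size=(12, 8))
att = grouped_self_attention(pts)
print(att.shape)  # (12, 8)
```

Each point attends only to the other points in its random group, so one pass costs O(N·g·d) rather than O(N²·d); repeated passes with different shuffles let information mix across the whole cloud.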
ISSN: 1392-1215; 2029-5731