Infrared and Visible Image Registration Based on Scale-Invariant PIIFD Feature and Locality Preserving Matching

Bibliographic Details
Main Authors: Qinglei Du, Aoxiang Fan, Yong Ma, Fan Fan, Jun Huang, Xiaoguang Mei
Format: Article
Language: English
Published: IEEE 2018-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8502747/
Description
Summary: Registration of multi-sensor data is a prerequisite for multimodal image analysis such as image fusion. This paper focuses on the problem of infrared and visible image registration, which plays an important role in enhancing visual perception. Existing methods based on multimodal feature descriptors, such as the partial intensity invariant feature descriptor (PIIFD), often fail to correctly align infrared and visible image pairs because of the significant differences in resolution and appearance between the two modalities. In this paper, we propose a scale-invariant PIIFD (SI-PIIFD) feature and a robust feature matching method to address this problem. Specifically, we first extract corner points as control point candidates, since they are usually sufficient in number and uniformly distributed across the image domain. Then, SI-PIIFDs are computed for all corner points and matched according to descriptor similarity together with a locality preserving geometric constraint. Subsequently, we model the spatial transformation between an infrared and visible image pair as an affine function and introduce a robust Bayesian framework to estimate it from the SI-PIIFD feature matches, even when they are contaminated by false matches. Finally, the backward approach is chosen for image transformation to avoid holes and overlaps in the output image. Extensive experiments on a challenging dataset, with comparisons to other state-of-the-art methods, demonstrate the effectiveness of the proposed method in terms of both accuracy and efficiency.
ISSN: 2169-3536
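
The abstract describes a concrete pipeline: corner detection, SI-PIIFD description, similarity-based matching, robust affine estimation, and backward warping. Below is a minimal sketch of that pipeline in Python/OpenCV, using off-the-shelf stand-ins for the paper's components: Shi-Tomasi corners for the corner detector, SIFT descriptors in place of SI-PIIFD (which has no standard library implementation), a plain ratio test without the locality preserving constraint, and RANSAC in place of the robust Bayesian estimator. File names and parameter values are hypothetical.

```python
# Sketch of the registration pipeline from the abstract, with stand-in
# components (SIFT for SI-PIIFD, RANSAC for the Bayesian estimator).
import cv2
import numpy as np

ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

def corner_keypoints(img, max_corners=500):
    """Corner points as control point candidates (Shi-Tomasi here)."""
    pts = cv2.goodFeaturesToTrack(img, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    return [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]

# Describe the corner points; the paper computes SI-PIIFD descriptors here.
sift = cv2.SIFT_create()
kp_ir, des_ir = sift.compute(ir, corner_keypoints(ir))
kp_vis, des_vis = sift.compute(vis, corner_keypoints(vis))

# Match by descriptor similarity with a ratio test; the paper additionally
# enforces a locality preserving geometric constraint on the match set.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_ir, des_vis, k=2)
        if m.distance < 0.8 * n.distance]

src = np.float32([kp_ir[m.queryIdx].pt for m in good])
dst = np.float32([kp_vis[m.trainIdx].pt for m in good])

# Robust affine estimation despite false matches (RANSAC as a stand-in).
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)

# warpAffine inverts the transform internally and samples the source image
# per output pixel (backward mapping), avoiding holes and overlaps.
registered = cv2.warpAffine(ir, A, (vis.shape[1], vis.shape[0]))
```

Note that OpenCV's warpAffine already resamples by mapping each output pixel back into the source image, which matches the backward approach the abstract adopts to avoid holes and overlaps in the warped result.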