Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS)

This paper proposes a self-calibration method that can be applied to multiple larger field-of-view (FOV) camera models on an advanced driver-assistance system (ADAS). First, the proposed method performs a series of pre-processing steps, such as edge detection, length thresholding, and edge grouping, to segregate robust line candidates from the pool of initially distorted line segments. A novel straightness cost constraint with a cross-entropy loss is then imposed on the selected line candidates, and this loss is exploited to optimize the lens-distortion parameters with the Levenberg–Marquardt (LM) approach. The best-fit distortion parameters are used to undistort the image frame, so that various high-end vision-based tasks can be run on the distortion-rectified frame. The study also investigates experimental approaches such as parameter sharing between multiple camera systems and a model-specific empirical γ-residual rectification factor. Quantitative comparisons were carried out between the proposed method, the traditional OpenCV method, and contemporary state-of-the-art self-calibration techniques on the KITTI dataset with synthetically generated distortion ranges. Well-known image-consistency metrics, such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and position error in salient-point estimation, were employed for the performance evaluation. Finally, for better validation of the proposed system on a real-time ADAS platform, a pragmatic qualitative analysis was conducted by streamlining high-end vision-based tasks, such as object detection, localization and mapping, and auto-parking, on the undistorted frames.
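The abstract outlines a plumb-line style pipeline: edge detection, length thresholding, and edge grouping to pick robust line candidates, followed by Levenberg–Marquardt optimization of the lens-distortion parameters against a straightness cost. The snippet below is only a minimal illustration of that general idea, not the authors' implementation: it assumes a one-parameter division model, replaces the paper's cross-entropy straightness loss with a plain least-squares one, and uses hypothetical helper names (edge_segments, calibrate_k1).

```python
# Minimal plumb-line style sketch (assumptions: one-parameter division model,
# least-squares straightness cost, OpenCV >= 4). NOT the paper's method: the
# cross-entropy weighting and gamma-residual rectification are omitted.
import cv2
import numpy as np
from scipy.optimize import least_squares


def edge_segments(gray, min_points=120):
    """Edge detection + length thresholding: keep only long edge chains."""
    edges = cv2.Canny(gray, 60, 180)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2).astype(np.float64)
            for c in contours if len(c) >= min_points]


def undistort_points(pts, k1, center):
    """One-parameter division model: x_u = c + (x_d - c) / (1 + k1 * r^2)."""
    d = pts - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)


def straightness_residuals(params, segments, center):
    """Distance of each undistorted segment point from that segment's
    best-fit line (smallest right singular vector gives the line normal)."""
    residuals = []
    for seg in segments:
        p = undistort_points(seg, params[0], center)
        p0 = p - p.mean(axis=0)
        _, _, vt = np.linalg.svd(p0, full_matrices=False)
        residuals.append(p0 @ vt[-1])
    return np.concatenate(residuals)


def calibrate_k1(image_bgr):
    """Estimate k1 by making long edge segments straight via Levenberg-Marquardt."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    scale = float(max(h, w))                       # normalize coordinates
    center = np.array([[w / 2.0, h / 2.0]]) / scale
    segments = [s / scale for s in edge_segments(gray)]
    fit = least_squares(straightness_residuals, x0=[0.0],
                        args=(segments, center), method="lm")
    return fit.x[0]
```

Once a distortion estimate such as k1 is available, the frame could be remapped to its rectified form and downstream ADAS tasks (detection, localization and mapping, auto-parking) run on it, as the abstract describes.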

Bibliographic Details
Main Authors: Vijay Kakani, Hakil Kim, Mahendar Kumbham, Donghun Park, Cheng-Bin Jin, Van Huan Nguyen
Format: Article
Language: English
Published: MDPI AG, 2019-07-01
Series: Sensors
Subjects: advanced driver-assistance system (ADAS); larger field-of-view (FOV); self-calibration; radial distortions; parameter sharing; model-specific empirical γ-residual rectification factor
Online Access: https://www.mdpi.com/1424-8220/19/15/3369
Record ID: doaj-af4c4a9d3d344b568b06879c9f092808
DOI: 10.3390/s19153369
ISSN: 1424-8220
Author Affiliations:
Vijay Kakani, Hakil Kim, Donghun Park, Cheng-Bin Jin: Information and Communication Engineering, Inha University, 100 Inharo, Nam-gu, Incheon 22212, Korea
Mahendar Kumbham: Valeo Vision Systems, Dunmore Road, Tuam, Co. Galway H54, Ireland
Van Huan Nguyen: Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City 758307, Vietnam
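The abstract reports PSNR, SSIM, and salient-point position error as the quantitative metrics on synthetically distorted KITTI frames. A minimal sketch of that comparison step, assuming OpenCV and scikit-image >= 0.19 (tools not named in this record), might look like:

```python
# Illustrative evaluation sketch (assumed tooling, not the paper's code):
# compare a distortion-rectified frame against the ground-truth frame
# using PSNR and SSIM from scikit-image.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def consistency_scores(reference_path, rectified_path):
    ref = cv2.imread(reference_path)    # original (undistorted) KITTI frame
    rect = cv2.imread(rectified_path)   # frame rectified by the calibration
    rect = cv2.resize(rect, (ref.shape[1], ref.shape[0]))  # align sizes
    psnr = peak_signal_noise_ratio(ref, rect, data_range=255)
    ssim = structural_similarity(ref, rect, channel_axis=2, data_range=255)
    return psnr, ssim
```

Higher PSNR and SSIM against the ground-truth frame indicate better distortion rectification; the paper's salient-point position error metric is not reproduced here.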
work_keys_str_mv AT vijaykakani feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
AT hakilkim feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
AT mahendarkumbham feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
AT donghunpark feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
AT chengbinjin feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
AT vanhuannguyen feasibleselfcalibrationoflargerfieldofviewfovcamerasensorsfortheadvanceddriverassistancesystemadas
_version_ 1725230962201067520