A Parallel Convolutional Neural Network for Pedestrian Detection


Bibliographic Details
Main Authors: Mengya Zhu, Yiquan Wu
Format: Article
Language: English
Published: MDPI AG 2020-09-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/9/9/1478
Description
Summary: Pedestrian detection is a crucial task in many vision-based applications, such as video surveillance, human activity analysis, and autonomous driving. Most existing pedestrian detection frameworks focus only on detection accuracy or on model parameters; how to balance detection accuracy against model size remains an open problem for practical pedestrian detection. In this paper, we propose a parallel, lightweight framework for pedestrian detection, named ParallelNet. ParallelNet consists of four branches, each of which learns different high-level semantic features; these are fused into one feature map as the final feature representation. Subsequently, the Fire module, which comprises Squeeze and Expand parts, is employed to reduce the number of model parameters: we replace some convolution modules in the backbone with Fire modules. Finally, focal loss is introduced into ParallelNet for end-to-end training. Experimental results on the Caltech–Zhang and KITTI datasets show that, compared with single-branch networks such as ResNet and SqueezeNet, ParallelNet achieves improved detection accuracy with fewer model parameters and lower giga floating-point operations (GFLOPs).
ISSN: 2079-9292
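The two parameter-saving ideas named in the abstract can be made concrete with a small back-of-the-envelope sketch. Below, a SqueezeNet-style Fire module (a 1×1 squeeze convolution followed by concatenated 1×1 and 3×3 expand convolutions) is compared against a plain 3×3 convolution with the same channel counts, and a scalar focal loss is evaluated. The channel sizes and the focal-loss hyperparameters (alpha, gamma) are illustrative assumptions, not values taken from the paper.

```python
import math

def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def fire_params(c_in, squeeze, expand1x1, expand3x3):
    """Fire module: 1x1 squeeze, then concatenated 1x1 and 3x3 expands."""
    return (conv_params(1, c_in, squeeze)          # squeeze 1x1
            + conv_params(1, squeeze, expand1x1)   # expand 1x1
            + conv_params(3, squeeze, expand3x3))  # expand 3x3

# Illustrative channel counts: 128 in, 128 out (64 + 64 after concat).
plain = conv_params(3, 128, 128)        # 147456 weights
fire = fire_params(128, 16, 64, 64)     # 12288 weights, ~12x fewer

def focal_loss(p, alpha=0.25, gamma=2.0):
    """Focal loss for one prediction: p is the predicted probability of
    the true class. gamma > 0 down-weights easy, well-classified examples
    so training concentrates on hard ones."""
    return -alpha * (1.0 - p) ** gamma * math.log(p)

# A confident correct detection contributes far less loss than a poor one.
easy = focal_loss(0.9)
hard = focal_loss(0.1)
```

With gamma = 0 and alpha = 1 the expression reduces to ordinary cross-entropy, which is one way to see that focal loss is a re-weighted cross-entropy rather than a different objective.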