A Simple Vehicle Counting System Using Deep Learning with YOLOv3 Model

Bibliographic Details
Main Author: Muhammad Fachrie
Format: Article
Language: Indonesian
Published: Ikatan Ahli Informatika Indonesia 2020-06-01
Series: Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
Subjects:
Online Access: http://jurnal.iaii.or.id/index.php/RESTI/article/view/1871
Description
Summary: Deep Learning is a popular Machine Learning approach that is widely used in many areas of daily life. Its robust performance and ready-to-use frameworks and architectures enable many people to develop various Deep Learning-based software or systems to support human tasks and activities. Traffic monitoring is one area that utilizes Deep Learning for several purposes. Using cameras installed at certain spots along the road, many tasks can be realized, such as vehicle counting, vehicle identification, traffic violation monitoring, and vehicle speed monitoring. In this paper, we discuss a Deep Learning implementation to create a vehicle counting system without having to track the vehicles' movements. To enhance the system's performance and to reduce the time needed to deploy a Deep Learning architecture, a pretrained YOLOv3 model is used in this research due to its good performance and moderate computational time in object detection. This research aims to create a simple vehicle counting system to help humans classify and count the vehicles that cross the street. The counting covers four types of vehicle, i.e. car, motorcycle, bus, and truck, whereas previous research counted cars only. As a result, our proposed system is capable of counting the vehicles crossing the road based on video captured by a camera, with the highest accuracy of 97.72%.
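The abstract does not give implementation details, but the approach it describes (per-frame detection with a pretrained YOLOv3 model and counting of four vehicle classes without tracking) can be sketched as follows. This is a minimal illustration, not the authors' code: the OpenCV DNN calls are standard, while the file names (yolov3.cfg, yolov3.weights, traffic.mp4), the counting-line position, and the per-class cooldown used to avoid double counting are assumptions made for this sketch.

# Minimal sketch: count cars, motorcycles, buses, and trucks in a traffic video
# using a pretrained YOLOv3 (COCO weights) through OpenCV's DNN module.
import cv2
import numpy as np

VEHICLE_CLASSES = {2: "car", 3: "motorcycle", 5: "bus", 7: "truck"}  # COCO class ids

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed file names
layer_names = net.getUnconnectedOutLayersNames()

def detect_vehicles(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Return [(class_name, (x, y, w, h)), ...] for vehicles detected in one frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            cid = int(np.argmax(scores))
            conf = float(scores[cid])
            if cid in VEHICLE_CLASSES and conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(cid)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [(VEHICLE_CLASSES[class_ids[i]], boxes[i]) for i in np.array(keep).flatten()]

def count_video(path, line_y=400, band=15):
    """Count vehicles whose box centre falls inside a thin horizontal band.
    A short per-class cooldown stands in for tracking; this heuristic is an
    assumption of the sketch, not the counting rule used in the paper."""
    counts = {name: 0 for name in VEHICLE_CLASSES.values()}
    cooldown = {name: 0 for name in VEHICLE_CLASSES.values()}
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for name in cooldown:
            cooldown[name] = max(0, cooldown[name] - 1)
        for name, (x, y, w, h) in detect_vehicles(frame):
            cy = y + h // 2
            if abs(cy - line_y) <= band and cooldown[name] == 0:
                counts[name] += 1
                cooldown[name] = 10  # skip a few frames to avoid double counting
    cap.release()
    return counts

if __name__ == "__main__":
    print(count_video("traffic.mp4"))  # assumed input video name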
ISSN: 2580-0760