Policy-Gradient and Actor-Critic Based State Representation Learning for Safe Driving of Autonomous Vehicles


Bibliographic Details
Main Authors: Abhishek Gupta, Ahmed Shaharyar Khwaja, Alagan Anpalagan, Ling Guan, Bala Venkatesh
Format: Article
Language: English
Published: MDPI AG 2020-10-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/20/21/5991
Description
Summary: In this paper, we propose an environment perception framework for autonomous driving using state representation learning (SRL). Unlike existing Q-learning based methods for efficient environment perception and object detection, our proposed method takes the learning loss into account under deterministic as well as stochastic policy gradients. Through a combination of a variational autoencoder (VAE), deep deterministic policy gradient (DDPG), and soft actor-critic (SAC), we focus on uninterrupted and reasonably safe autonomous driving without steering off the track for a considerable driving distance. Our proposed technique enables autonomous vehicles to learn through complex interactions with the environment, without being explicitly trained on driving datasets. To ensure the effectiveness of the scheme over a sustained period of time, we employ a reward-penalty based system in which a negative reward is associated with unfavourable actions and a positive reward is awarded for favourable actions. Results obtained through simulations on the DonKey simulator show the effectiveness of our proposed method, as seen in the variations of policy loss, value loss, reward function, and cumulative reward for 'VAE+DDPG' and 'VAE+SAC' over the learning process.
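To make the described pipeline concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how a VAE encoder can compress camera frames into a latent state that a DDPG-style actor maps to continuous steering/throttle actions, together with a reward-penalty function of the kind the abstract describes. All class names, dimensions, and the penalty value are illustrative assumptions.

```python
# Illustrative sketch only: a VAE encoder produces a latent state z from a
# camera frame; a deterministic actor (as in DDPG) maps z to actions; a
# reward-penalty function rewards on-track progress and penalises going off.
import torch
import torch.nn as nn


class VAEEncoder(nn.Module):
    """Encodes an RGB frame into a low-dimensional latent state z."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers the flattened input size on the first forward pass.
        self.mu = nn.LazyLinear(latent_dim)
        self.log_var = nn.LazyLinear(latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z, mu, log_var


class Actor(nn.Module):
    """Deterministic policy head over the VAE latent state (DDPG-style)."""

    def __init__(self, latent_dim: int = 32, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # steering/throttle in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)


def reward(on_track: bool, progress: float) -> float:
    """Reward-penalty scheme: positive reward proportional to forward
    progress while on the track, a fixed negative reward when the vehicle
    steers off (the -1.0 penalty is an assumed placeholder)."""
    return progress if on_track else -1.0


# Usage: encode a dummy frame and query an action.
frame = torch.rand(1, 3, 64, 64)   # stand-in for a camera image
z, _, _ = VAEEncoder()(frame)
action = Actor()(z)                # [steering, throttle]
```

In the 'VAE+SAC' variant, the deterministic actor above would be replaced by a stochastic policy that outputs a distribution over actions; the encoder and reward structure remain the same.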
ISSN: 1424-8220