Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine

Bibliographic Details
Main Authors: Qiang Fang, Wenzhuo Zhang, Xitong Wang
Format: Article
Language: English
Published: MDPI AG 2021-08-01
Series: Electronics
Subjects:
A3C
Online Access: https://www.mdpi.com/2079-9292/10/16/1997
Description
Summary: In this paper, we focus on the challenges of training efficiency, the design of reward functions, and generalization in reinforcement learning for visual navigation, and we propose a regularized extreme learning machine-based inverse reinforcement learning approach (RELM-IRL) to improve navigation performance. Our contributions are mainly three-fold. First, a framework combining an extreme learning machine with inverse reinforcement learning is presented; this framework improves sample efficiency, obtains the reward function directly from the images observed by the agent, and improves generalization to new targets and new environments. Second, the extreme learning machine is regularized by multi-response sparse regression and the leave-one-out method, which further improves its generalization ability. Third, simulation experiments in the AI-THOR environment showed that the proposed approach outperformed previous end-to-end approaches, demonstrating the effectiveness and efficiency of our approach.
ISSN: 2079-9292
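
To make the regularized extreme learning machine mentioned in the summary concrete, below is a minimal Python sketch of an ELM whose output weights are solved in closed form with ridge (L2) regularization, together with a closed-form leave-one-out (PRESS) error that can guide model selection. This is an illustration under stated assumptions, not the paper's implementation: the function names, the tanh activation, and the use of plain L2 regularization are assumptions, and the actual RELM-IRL additionally ranks hidden neurons with multi-response sparse regression before the leave-one-out step and uses the learned mapping as a reward function for inverse reinforcement learning.

import numpy as np

# Minimal sketch of a regularized extreme learning machine (ELM) for a generic
# regression task (e.g., mapping image features observed by the agent to reward
# values). Hypothetical names; the paper's full MRSR + LOO pruning is not shown.

def elm_fit(X, Y, n_hidden=100, ridge=1e-2, seed=0):
    """Fit ELM output weights with ridge regularization.

    X: (n_samples, n_features) inputs
    Y: (n_samples, n_outputs) regression targets
    Returns the model parameters and a PRESS-style leave-one-out MSE.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float).reshape(len(X), -1)
    rng = np.random.default_rng(seed)
    # Random, untrained input weights and biases -- the defining trait of an ELM.
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)                      # hidden-layer activations
    # Closed-form ridge solution for the output weights.
    A = H.T @ H + ridge * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ Y)
    # Leave-one-out (PRESS) residuals from the hat-matrix diagonal; this is the
    # kind of LOO criterion one can use to choose n_hidden or the ridge strength.
    hat_diag = np.einsum('ij,jk,ik->i', H, np.linalg.inv(A), H)
    loo_mse = np.mean(((Y - H @ beta) / (1.0 - hat_diag)[:, None]) ** 2)
    return (W_in, b, beta), loo_mse

def elm_predict(model, X):
    """Predict with a fitted ELM."""
    W_in, b, beta = model
    return np.tanh(np.asarray(X, dtype=float) @ W_in + b) @ beta

# Example usage with synthetic data (64-dim features, scalar reward):
#   X = np.random.rand(500, 64); Y = np.random.rand(500, 1)
#   model, loo = elm_fit(X, Y, n_hidden=50)
#   rewards = elm_predict(model, X)

Because the input weights are fixed at random, only the linear output layer is trained, which is why fitting reduces to a single regularized least-squares solve; the leave-one-out error comes essentially for free from the same matrices, which is what makes this family of methods attractive for the sample-efficiency argument in the abstract.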