Combining Q-Learning with Hybrid Learning Approach in RoboCup

Bibliographic Details
Main Authors: Hsiu-Chen Liu, 劉繡禎
Other Authors: Jong-Yih Kuo
Format: Others
Language: zh-TW
Published: 2013
Online Access: http://ndltd.ncl.edu.tw/handle/s2z5ry
Description
Summary: Master's thesis === National Taipei University of Technology === Master's Program in Electrical Engineering and Computer Science === 101 === RoboCup is an international competition founded in 1997. Its mission is: "By the mid-21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup." For academia, RoboCup provides an excellent test bed for machine learning, because the environment state in a soccer game changes constantly. Therefore, how to make a soccer agent learn autonomously to act with the best responses has become an important issue. The paper "Applying Hybrid Learning Approach to RoboCup's Strategy" discusses the hybrid learning approach in this field. In this paper, to carry on that concept, we continue to apply the hybrid learning approach for the coach agent, while for the player agent we apply the Q-Learning method. Furthermore, to address the excessively large environment state space, which slows down the learning rate, we use fuzzy states and fuzzy rules to reduce the state space and simplify the state-action table of Q-Learning. Finally, we build a soccer team in the RoboCup Soccer Simulator in which both the coach agent and the player agents have learning ability. Through experiments, we analyze and compare the learning effects and the execution efficiency.
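
As a rough illustration of the Q-Learning-with-fuzzy-states idea named in the abstract (a minimal sketch, not the thesis's actual implementation), the Python fragment below maps continuous observations to a few fuzzy labels so that the state-action table stays compact; every state label, membership boundary, action name, and parameter value here is an assumption made for illustration only.

    # Illustrative sketch only: Q-Learning over a fuzzified state space.
    # All state labels, membership boundaries, actions, and parameters are
    # assumptions, not the thesis's actual design.
    import random
    from collections import defaultdict

    ACTIONS = ["dribble", "pass", "shoot"]        # hypothetical player actions
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2         # learning rate, discount, exploration

    def fuzzify(ball_dist, goal_angle):
        """Map continuous observations to coarse fuzzy labels to shrink the state space."""
        dist = "near" if ball_dist < 5 else "mid" if ball_dist < 20 else "far"
        angle = "front" if abs(goal_angle) < 30 else "side"
        return (dist, angle)                      # only 3 x 2 = 6 fuzzy states

    Q = defaultdict(float)                        # compact state-action table

    def choose_action(state):
        """Epsilon-greedy action selection over the fuzzy state."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Standard Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # One learning step with made-up observations:
    s = fuzzify(ball_dist=3.2, goal_angle=10.0)
    a = choose_action(s)
    s_next = fuzzify(ball_dist=1.0, goal_angle=5.0)
    update(s, a, reward=1.0, next_state=s_next)

With only a handful of fuzzy states instead of a continuous observation space, the state-action table stays small, which is the effect the abstract attributes to the fuzzy-state and fuzzy-rule step.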