Transferring Deep Reinforcement Learning with Adversarial Objective and Augmentation

Master's === National Taiwan University === Graduate Institute of Information Management === 106 === In the past few years, deep reinforcement learning has been shown to solve problems with complex state spaces, such as video games and board games. The next step for intelligent agents is to generalize between tasks, using prior experience to pick up new skills more quickly. However, most current reinforcement learning algorithms suffer from catastrophic forgetting, even when facing a very similar target task. Our approach enables the agent to generalize knowledge from a single source task and to boost its learning progress with a semi-supervised learning method when facing a new task. We evaluate this approach on Atari games, a popular reinforcement learning benchmark, and show that it outperforms common baselines based on pre-training and fine-tuning.
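The abstract names an adversarial objective and data augmentation for transferring a deep RL agent between tasks, but does not spell out the architecture or loss. Purely as an illustrative sketch, and not the thesis's actual algorithm, the snippet below shows what a generic DANN-style domain-confusion objective between source-task and target-task Atari observations could look like in PyTorch. The names Encoder, TaskDiscriminator, and adversarial_transfer_loss, the network sizes, and the lam weight are all hypothetical choices; in practice such a term would be added to the usual policy loss while fine-tuning on the target task.

```python
# Illustrative sketch only (hypothetical modules, not the thesis's method):
# a gradient-reversal "adversarial objective" that pushes an encoder to make
# source-task and target-task features indistinguishable.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class Encoder(nn.Module):
    """Small conv encoder for stacks of four 84x84 Atari frames (hypothetical sizes)."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 7 * 7, feat_dim)

    def forward(self, obs):
        return F.relu(self.fc(self.conv(obs).flatten(1)))


class TaskDiscriminator(nn.Module):
    """Predicts whether a feature came from the source task or the target task."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))


def adversarial_transfer_loss(encoder, discriminator, src_obs, tgt_obs, lam=0.1):
    """Domain-confusion loss: the discriminator tries to tell the two tasks apart,
    while the reversed gradient trains the encoder to confuse it."""
    feats = torch.cat([encoder(src_obs), encoder(tgt_obs)], dim=0)
    labels = torch.cat([
        torch.zeros(src_obs.size(0), dtype=torch.long, device=feats.device),  # 0 = source task
        torch.ones(tgt_obs.size(0), dtype=torch.long, device=feats.device),   # 1 = target task
    ])
    return F.cross_entropy(discriminator(feats, lam), labels)


if __name__ == "__main__":
    # Smoke test with random frames standing in for observations from two tasks.
    enc, disc = Encoder(), TaskDiscriminator()
    src = torch.randn(8, 4, 84, 84)
    tgt = torch.randn(8, 4, 84, 84)
    print(adversarial_transfer_loss(enc, disc, src, tgt).item())
```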


Bibliographic Details
Main Authors: Shu-Hsuan Hsu, 許書軒
Other Authors: Bing-Yu Chen
Format: Others
Language: zh-TW
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/xr95cu
id ndltd-TW-106NTU05396048
record_format oai_dc
spelling ndltd-TW-106NTU05396048 2019-07-25T04:46:48Z http://ndltd.ncl.edu.tw/handle/xr95cu Transferring Deep Reinforcement Learning with Adversarial Objective and Augmentation 利用對抗式目標與資料擴增於深度強化學習間的遷移 Shu-Hsuan Hsu 許書軒 碩士 國立臺灣大學 資訊管理學研究所 106 In the past few years, deep reinforcement learning has been shown to solve problems with complex state spaces, such as video games and board games. The next step for intelligent agents is to generalize between tasks, using prior experience to pick up new skills more quickly. However, most current reinforcement learning algorithms suffer from catastrophic forgetting, even when facing a very similar target task. Our approach enables the agent to generalize knowledge from a single source task and to boost its learning progress with a semi-supervised learning method when facing a new task. We evaluate this approach on Atari games, a popular reinforcement learning benchmark, and show that it outperforms common baselines based on pre-training and fine-tuning. Bing-Yu Chen 陳炳宇 2018 學位論文 ; thesis 32 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === National Taiwan University === Graduate Institute of Information Management === 106 === In the past few years, deep reinforcement learning has been shown to solve problems with complex state spaces, such as video games and board games. The next step for intelligent agents is to generalize between tasks, using prior experience to pick up new skills more quickly. However, most current reinforcement learning algorithms suffer from catastrophic forgetting, even when facing a very similar target task. Our approach enables the agent to generalize knowledge from a single source task and to boost its learning progress with a semi-supervised learning method when facing a new task. We evaluate this approach on Atari games, a popular reinforcement learning benchmark, and show that it outperforms common baselines based on pre-training and fine-tuning.
author2 Bing-Yu Chen
author Shu-Hsuan Hsu
許書軒
title Transferring Deep Reinforcement Learning with Adversarial Objective and Augmentation
publishDate 2018
url http://ndltd.ncl.edu.tw/handle/xr95cu