KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning
Recently, deep reinforcement learning (RL) algorithms have achieved significant progress in the multi-agent domain. However, training for increasingly complex tasks is time-consuming and resource-intensive. To alleviate this problem, efficient leveraging of historical experience is essential,...
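The abstract's central technique is knowledge distillation: transferring a trained (teacher) agent's policy into a new (student) agent by matching action distributions. A minimal sketch of a temperature-softened distillation loss follows; the function names, shapes, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-softened softmax over action logits."""
    z = np.exp((logits - logits.max()) / tau)
    return z / z.sum()

def distillation_loss(teacher_logits, student_logits, tau=2.0):
    """KL(teacher || student) between softened action distributions.

    A higher temperature `tau` flattens both distributions, exposing the
    teacher's relative preferences over non-greedy actions to the student.
    """
    p = softmax(teacher_logits, tau)
    q = softmax(student_logits, tau)
    return float(np.sum(p * np.log(p / q)))
```

In a reuse setting like the one the abstract describes, this loss would be minimized alongside the student's ordinary RL objective, so the new agent benefits from the historical agent's experience without retraining from scratch.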
Main Authors: Zijian Gao, Kele Xu, Bo Ding, Huaimin Wang
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/23/8/1043
Similar Items
- Knowledge Management and Reuse in Virtual Learning Communities
  by: Houda Sekkal, et al.
  Published: (2019-08-01)
- Layer-Level Knowledge Distillation for Deep Neural Network Learning
  by: Hao-Ting Li, et al.
  Published: (2019-05-01)
- Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks
  by: Fan, W., et al.
  Published: (2022)
- Viewpoint robust knowledge distillation for accelerating vehicle re-identification
  by: Yi Xie, et al.
  Published: (2021-07-01)