Multi-Agent Reinforcement Learning-Based Resource Management for End-to-End Network Slicing
To meet the explosive growth of mobile traffic, the 5G network is designed to be flexible and to support multi-access edge computing (MEC), thereby improving the end-to-end quality of service (QoS). In particular, 5G network slicing, which allows a physical infrastructure to be split into multiple logical networks, balances network resource allocation among different service types with on-demand resource requests. However, achieving effective resource allocation across the end-to-end network is difficult because of the dynamic characteristics of slicing requests, such as uncertain real-time resource demand and heterogeneous requirements. In this paper, we develop a reinforcement learning (RL)-based dynamic resource allocation framework for end-to-end network slicing with heterogeneous requirements in multi-layer MEC environments. We first design a hierarchical MEC architecture and formulate resource allocation for end-to-end network slicing as an optimization problem using a Markov decision process (MDP). Using proximal policy optimization (PPO), we develop independently-collaborative and jointly-collaborative dynamic resource allocation algorithms that maximize resource efficiency while satisfying the QoS requirements of the slices. Experimental results show that the proposed algorithms recognize the characteristics of slice requests and incoming resource demands and allocate resources efficiently with a high QoS satisfaction rate.
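The record contains no code, but the abstract outlines an MDP formulation of slice resource allocation solved with PPO-trained agents. Below is a minimal, hypothetical sketch of that idea rather than the authors' implementation: the environment name `SliceAllocationEnv`, the demand model, and the reward shaping are all assumptions, and Stable-Baselines3's off-the-shelf PPO stands in for the paper's independently-collaborative agents.

```python
# Hypothetical sketch (not from the paper): a toy per-node slice-allocation MDP
# trained with an off-the-shelf PPO implementation. One such agent per MEC node
# roughly corresponds to the independently-collaborative setting described in
# the abstract; a jointly-collaborative variant would observe all nodes at once.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # assumption: SB3 PPO as a stand-in


class SliceAllocationEnv(gym.Env):
    """Toy MDP: split one node's capacity across n_slices slices each timestep."""

    def __init__(self, n_slices=3, capacity=1.0, episode_len=50):
        super().__init__()
        self.n_slices = n_slices
        self.capacity = capacity
        self.episode_len = episode_len
        # Observation: current per-slice resource demand (unknown in advance).
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_slices,), dtype=np.float32)
        # Action: unnormalized allocation weights, softmax-normalized in step().
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_slices,), dtype=np.float32)

    def _sample_demand(self):
        return self.np_random.uniform(0.0, 1.0, size=self.n_slices).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.demand = self._sample_demand()
        return self.demand, {}

    def step(self, action):
        weights = np.exp(action - np.max(action))
        alloc = self.capacity * weights / weights.sum()
        # Assumed reward: QoS satisfaction (fraction of demand met) minus an
        # over-provisioning penalty, to encourage efficient allocation.
        satisfied = np.minimum(alloc, self.demand).sum() / max(self.demand.sum(), 1e-6)
        waste = np.maximum(alloc - self.demand, 0.0).sum() / self.capacity
        reward = float(satisfied - 0.5 * waste)
        self.t += 1
        self.demand = self._sample_demand()
        return self.demand, reward, False, self.t >= self.episode_len, {}


if __name__ == "__main__":
    # One independent PPO agent per (hypothetical) MEC node.
    agents = [PPO("MlpPolicy", SliceAllocationEnv(), verbose=0) for _ in range(2)]
    for agent in agents:
        agent.learn(total_timesteps=10_000)
```

A jointly-collaborative counterpart would, roughly, replace the per-node agents with a single policy (or coordinated policies) observing the concatenated demands of all nodes; the sketch above only illustrates the independent case.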
Main Authors: | Yohan Kim, Hyuk Lim |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2021.3072435 |
Subjects: | 5G; network slicing; multi-access edge computing; network resource management; multi-agent reinforcement learning |
Online Access: | https://ieeexplore.ieee.org/document/9400356/ |
Author Details: | Yohan Kim (ORCID: https://orcid.org/0000-0002-6741-1803), School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea; Hyuk Lim (ORCID: https://orcid.org/0000-0002-9926-3913), AI Graduate School, Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea |