Deep Convolutional Neural Network Assisted Reinforcement Learning Based Mobile Network Power Saving

This paper addresses the power saving problem in mobile networks. Base station (BS) power and network traffic volume (NTV) models are first established. The BS power model is built from in-house equipment measurements sampled across different BS load configurations, while the NTV model is derived from traffic data reported in the literature. A threshold-based adaptive power saving method is then discussed and serves as the benchmark.
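As a rough illustration of what a measurement-fitted, load-dependent BS power model can look like, here is a minimal sketch using an affine static-plus-dynamic form. The coefficients and sleep-power values are hypothetical placeholders, not the paper's fitted measurements:

```python
# Illustrative load-dependent BS power model. The coefficients below are
# hypothetical placeholders; the paper fits its model to in-house equipment
# measurements, which are not reproduced in this record.

def bs_power(load, is_macro=True, active=True):
    """Approximate BS power draw (watts) as an affine function of load,
    a common simplification for measurement-based models."""
    if not active:
        return 15.0 if is_macro else 5.0  # assumed sleep-mode power
    if is_macro:
        return 130.0 + 600.0 * load       # assumed static + load-dependent parts
    return 10.0 + 30.0 * load             # assumed microcell coefficients
```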

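The record does not spell out the benchmark's exact switching rules. A minimal sketch of one plausible threshold policy is given below, assuming a simple cell representation and hypothetical on/off thresholds; it deactivates lightly loaded microcells and reactivates them as the load on the remaining active cells rises:

```python
# Minimal sketch of a threshold-based adaptive power saving policy.
# The thresholds and the cell/load representation are illustrative
# assumptions, not the paper's exact benchmark.

from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    load: float        # current load, normalized to [0, 1]
    is_macro: bool     # macrocells stay on for coverage
    active: bool = True

def threshold_policy(cells, off_threshold=0.2, on_threshold=0.6):
    """Deactivate lightly loaded microcells; reactivate them when the
    remaining active cells become congested."""
    active_cells = [c for c in cells if c.active]
    avg_active_load = (
        sum(c.load for c in active_cells) / max(1, len(active_cells))
    )
    for cell in cells:
        if cell.is_macro:
            continue  # keep macrocells on to preserve coverage
        if cell.active and cell.load < off_threshold:
            cell.active = False  # traffic can be offloaded to neighbors
        elif not cell.active and avg_active_load > on_threshold:
            cell.active = True   # bring capacity back as NTV rises
    return cells
```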
Next, a BS power control framework is created using Q-learning, with the state-action value function approximated by a deep convolutional neural network (DCNN). The resulting DCNN-Q agent controls the loads of individual cells so as to adapt to NTV variations and reduce power consumption. The framework is trained and evaluated in a simulated heterogeneous network of macrocells and microcells, where the proposed DCNN-Q method achieves greater power savings than the threshold-based benchmark.
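The paper's exact DCNN architecture, state encoding, and reward are not included in this record. The following PyTorch sketch only illustrates the general pattern the abstract describes: a small convolutional network approximating the Q-function over a grid of per-cell loads, trained with a standard temporal-difference update. The network shape and replay-batch format are assumptions:

```python
# Illustrative DQN-style sketch: a small convolutional network approximates
# Q(s, a) over a 2-D grid of per-cell loads. Architecture, state shape, and
# training details are assumptions; the paper's actual DCNN-Q may differ.

import torch
import torch.nn as nn

class DCNNQ(nn.Module):
    def __init__(self, grid_size=8, num_actions=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * grid_size * grid_size, num_actions)

    def forward(self, load_grid):
        # load_grid: (batch, 1, grid_size, grid_size) of per-cell loads
        return self.head(self.features(load_grid))

def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Q-learning step on a replay batch (s, a, r, s')."""
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, rewards + gamma * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup, each discrete action would map to a load or on/off adjustment for some subset of cells, and the reward would trade served traffic against power drawn, consistent with the paper's stated objective of adapting to NTV variations while reducing consumption.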

Bibliographic Details
Main Authors: Shangbin Wu, Yue Wang, Lu Bai
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Subjects: Power saving; deep convolutional neural network; reinforcement learning
Online Access:https://ieeexplore.ieee.org/document/9094184/
Published in: IEEE Access, vol. 8, pp. 93671-93681, 2020
DOI: 10.1109/ACCESS.2020.2995057
ISSN: 2169-3536
Author Affiliations: Shangbin Wu (ORCID: 0000-0003-4734-5295), Samsung R&D Institute U.K., Staines-upon-Thames, U.K.; Yue Wang, Samsung R&D Institute U.K., Staines-upon-Thames, U.K.; Lu Bai (ORCID: 0000-0003-1687-0863), School of Cyber Science and Technology, Beihang University, Beijing, China