Deep Convolutional Neural Network Assisted Reinforcement Learning Based Mobile Network Power Saving

Bibliographic Details
Main Authors: Shangbin Wu, Yue Wang, Lu Bai
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9094184/
Description
Summary: This paper addresses the power saving problem in mobile networks. Base station (BS) power and network traffic volume (NTV) models are first established. The BS power is modeled from in-house equipment measurements taken across different BS load configurations, and the NTV model is built from traffic data in the literature. A threshold-based adaptive power saving method is then discussed and serves as the benchmark. Next, a BS power control framework is created using Q-learning, with the state-action value function approximated by a deep convolutional neural network (DCNN). The DCNN-Q agent controls the loads of cells in order to adapt to NTV variations and reduce power consumption. The DCNN-Q power saving framework is trained and simulated in a heterogeneous network including macrocells and microcells, where it achieves greater power savings than the threshold-based method.
ISSN: 2169-3536
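
The article does not publish code, but the pipeline the abstract describes maps onto a standard deep Q-learning setup. The Python/PyTorch sketch below shows one plausible shape of it: a small convolutional network approximates the state-action value function Q(s, a) over a cells-by-time traffic grid, and each action sets a discrete load level for one cell. The cell count, history length, action granularity, network architecture, and reward comment are all illustrative assumptions, not the authors' published design.

    import random
    import torch
    import torch.nn as nn

    # Hypothetical dimensions; the paper does not disclose its exact setup.
    N_CELLS = 7          # macrocells + microcells in the heterogeneous network
    HISTORY = 24         # hours of NTV / cell-load history fed to the agent
    N_LOAD_LEVELS = 4    # discrete load settings per cell (the action space)

    class QNetwork(nn.Module):
        """CNN approximating Q(s, a): the state is a cells-by-time load/NTV
        grid, the output is one Q-value per (cell, load-level) action."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Linear(32 * N_CELLS * HISTORY,
                                  N_CELLS * N_LOAD_LEVELS)

        def forward(self, state):  # state: (batch, 1, N_CELLS, HISTORY)
            return self.head(self.features(state))

    def select_action(q_net, state, epsilon):
        """Epsilon-greedy policy over the discrete load-control actions."""
        if random.random() < epsilon:
            return random.randrange(N_CELLS * N_LOAD_LEVELS)
        with torch.no_grad():
            return int(q_net(state).argmax(dim=1).item())

    def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
        """One Q-learning step. The reward would combine power saved with a
        penalty for failing to serve the offered traffic; the paper's exact
        reward shaping is not public, so this is only the generic update."""
        s, a, r, s_next = batch  # a: (batch,) long tensor of action indices
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * target_net(s_next).max(dim=1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In a training loop matching the abstract, transitions would be collected from the simulated heterogeneous macro/micro network, the online network would be updated with td_update against a periodically synced target network, and the resulting policy would be compared against the threshold-based adaptive method as the baseline.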