Lightweight Reinforcement Learning for Energy Efficient Communications in Wireless Sensor Networks

Bibliographic Details
Main Authors: Claudio Savaglio, Pasquale Pace, Gianluca Aloi, Antonio Liotta, Giancarlo Fortino
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8657937/
Description
Summary: High-density communications in wireless sensor networks (WSNs) demand new approaches to meet stringent energy and spectrum requirements. We turn to reinforcement learning, a prominent method in artificial intelligence, to design an energy-preserving MAC protocol with the aim of extending the network lifetime. Our QL-MAC protocol is derived from Q-learning, which iteratively tweaks the MAC parameters through a trial-and-error process to converge to a low-energy state. This has the dual benefit of 1) solving this minimization problem without the need to predetermine the system model and 2) providing a protocol that self-adapts to topological and other external changes. QL-MAC self-adjusts the WSN node duty cycle, reducing energy consumption without detrimental effects on the other network parameters. This is achieved by adjusting the radio sleeping and active periods based on traffic predictions and the transmission state of neighboring nodes. Our findings are corroborated by an extensive set of experiments carried out on off-the-shelf devices, alongside large-scale simulations.
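The summary describes the core mechanism: a node learns, by trial and error, which duty cycle balances energy spent against traffic served. A minimal sketch of that idea in tabular Q-learning is shown below. This is not the authors' QL-MAC implementation; the state discretization, action set, and reward shape are illustrative assumptions chosen only to make the loop concrete.

```python
# Hypothetical sketch of the Q-learning idea behind a duty-cycle MAC
# (illustrative only, not the QL-MAC protocol from the paper).
# State:  discretized neighbor-traffic level.
# Action: the node's duty cycle (fraction of time the radio is awake).
# Reward: penalizes radio-on energy and penalizes sleeping through traffic.
import random

STATES = 3                    # traffic levels: 0 = low, 1 = medium, 2 = high
ACTIONS = [0.1, 0.5, 0.9]     # candidate duty cycles (assumed values)
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

# Q-table: one row per traffic state, one column per duty cycle
Q = [[0.0] * len(ACTIONS) for _ in range(STATES)]

def reward(duty, traffic):
    """Energy cost grows with radio-on time; unserved traffic is penalized."""
    energy_cost = duty
    missed = max(0.0, traffic / (STATES - 1) - duty)
    return -(energy_cost + 2.0 * missed)

def step(state):
    # epsilon-greedy action selection over duty cycles
    if random.random() < EPSILON:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    r = reward(ACTIONS[a], state)
    next_state = random.randrange(STATES)   # toy traffic dynamics
    # standard Q-learning update
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][a])
    return next_state

random.seed(0)
s = 0
for _ in range(5000):
    s = step(s)

# Greedy duty cycle learned per traffic level: low traffic should map to a
# low duty cycle, heavy traffic to a high one.
best = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[st][i])]
        for st in range(STATES)]
print(best)
```

The trial-and-error loop never needs a model of the traffic: the node only observes rewards, which is the model-free property the summary highlights as benefit 1).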
ISSN:2169-3536