Dynamic Energy Management for Perpetual Operation of Energy Harvesting Wireless Sensor Node Using Fuzzy Q-Learning

Bibliographic Details
Main Authors: Hsu, R.C. (Author), Lin, T.-H. (Author), Su, P.-C. (Author)
Format: Article
Language: English
Published: MDPI 2022
Subjects:
Online Access:View Fulltext in Publisher
LEADER 02927nam a2200457Ia 4500
001 10.3390-en15093117
008 220517s2022 CNT 000 0 und d
020 |a 1996-1073 (ISSN) 
245 1 0 |a Dynamic Energy Management for Perpetual Operation of Energy Harvesting Wireless Sensor Node Using Fuzzy Q-Learning 
260 0 |b MDPI  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.3390/en15093117 
520 3 |a In an energy harvesting wireless sensor node (EHWSN), balancing harvested energy against energy consumption through dynamic energy management, so that the node can operate perpetually, is one of the most important research topics. In this study, a novel fuzzy Q-learning (FQL)-based dynamic energy management scheme (FQLDEM) is proposed that adapts its policy to the time-varying environment with respect to both the harvested energy and the energy consumption of the WSN. The FQLDEM applies Q-learning to train, evaluate, and update the fuzzy rule base, and then uses the fuzzy inference system (FIS) to determine the working duty cycle of the sensor in the EHWSN. Through interaction with the energy harvesting environment, the FQL learning agent finds fuzzy rules that adapt the working duty cycle toward energy neutrality, so that perpetual operation of the EHWSN can be achieved. Experimental results show that the FQLDEM maintains the battery charge status at a higher level than existing methods, such as the reinforcement learning (RL) method and dynamic duty cycle adaptation (DDCA), and achieves perpetual operation of the EHWSN. Furthermore, experimental results for required on-demand sensing measurements show that the FQLDEM method gradually improves to meet 65% of the service quality control requirements in the early stage, outperforming the RL-based and DDCA methods. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. 
650 0 4 |a Balance of energies 
650 0 4 |a Decision trees 
650 0 4 |a Duty-cycle 
650 0 4 |a dynamic energy management 
650 0 4 |a Energy 
650 0 4 |a Energy harvesting 
650 0 4 |a energy harvesting wireless sensor node 
650 0 4 |a Energy management 
650 0 4 |a energy neutrality 
650 0 4 |a Energy utilization 
650 0 4 |a Fuzzy inference 
650 0 4 |a fuzzy Q-learning 
650 0 4 |a Fuzzy rules 
650 0 4 |a Learning algorithms 
650 0 4 |a perpetual operation 
650 0 4 |a Quality control 
650 0 4 |a Reinforcement learning 
650 0 4 |a Research topics 
650 0 4 |a Sensor nodes 
650 0 4 |a Wireless sensor node 
700 1 |a Hsu, R.C.  |e author 
700 1 |a Lin, T.-H.  |e author 
700 1 |a Su, P.-C.  |e author 
773 |t Energies
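
The abstract (field 520) describes Q-learning over a fuzzy rule base, with a fuzzy inference step producing the node's working duty cycle. The following is a minimal illustrative sketch of that general fuzzy Q-learning pattern, not the authors' implementation: the membership functions, reward, state variable (battery level), candidate duty cycles, and all names (`FQLAgent`, `memberships`, `DUTY_CYCLES`) are assumptions made for the example.

```python
import random

# Hypothetical fuzzy Q-learning (FQL) sketch for duty-cycle selection.
# Each fuzzy rule keeps its own q-vector over candidate duty cycles;
# the emitted duty cycle blends per-rule choices by firing strength.

DUTY_CYCLES = [0.1, 0.3, 0.5, 0.7, 0.9]  # assumed candidate actions

def memberships(battery):
    """Triangular fuzzy sets 'low', 'mid', 'high' over battery level in [0, 1]."""
    low = max(0.0, 1.0 - 2.0 * battery)
    high = max(0.0, 2.0 * battery - 1.0)
    mid = max(0.0, 1.0 - abs(battery - 0.5) * 2.0)
    return [low, mid, high]

class FQLAgent:
    def __init__(self, lr=0.1, gamma=0.9, eps=0.1):
        # One q-vector per fuzzy rule, one entry per candidate duty cycle.
        self.q = [[0.0] * len(DUTY_CYCLES) for _ in range(3)]
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, battery, rng=random):
        """Epsilon-greedy action per rule, blended by normalized firing strength."""
        mu = memberships(battery)
        choices = []
        for qi in self.q:
            if rng.random() < self.eps:
                choices.append(rng.randrange(len(DUTY_CYCLES)))
            else:
                choices.append(max(range(len(DUTY_CYCLES)), key=qi.__getitem__))
        total = sum(mu) or 1.0
        duty = sum(m * DUTY_CYCLES[c] for m, c in zip(mu, choices)) / total
        return duty, choices, mu

    def update(self, choices, mu, reward, next_battery):
        """TD update of each fired rule's q-value, weighted by firing strength."""
        mu2 = memberships(next_battery)
        total2 = sum(mu2) or 1.0
        # Next-state value: firing-strength-weighted best q per rule.
        v_next = sum(m * max(qi) for m, qi in zip(mu2, self.q)) / total2
        total = sum(mu) or 1.0
        q_sa = sum(m * self.q[i][c]
                   for i, (m, c) in enumerate(zip(mu, choices))) / total
        td = reward + self.gamma * v_next - q_sa
        for i, (m, c) in enumerate(zip(mu, choices)):
            self.q[i][c] += self.lr * (m / total) * td
```

In use, a reward favoring battery levels near a target (energy neutrality) would drive the rules toward duty cycles that balance harvest and consumption; blending discrete per-rule choices by firing strength yields a continuous duty cycle, which is the usual appeal of FQL over plain tabular Q-learning.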