Deep Federated Adaptation: An Adaptative Residential Load Forecasting Approach with Federated Learning

Bibliographic Details
Main Authors: Shi, Y. (Author), Xu, X. (Author)
Format: Article
Language: English
Published: MDPI 2022
Online Access: View Fulltext in Publisher
Description
Summary: Residential-level short-term load forecasting (STLF) is significant for power system operation. Data-driven forecasting models, especially machine-learning-based models, are sensitive to the amount of available data. However, privacy and security concerns raised by regulatory authorities and users limit the data that can be shared. Meanwhile, the limited data from newly built houses are not sufficient to support building a powerful model. A further problem is that the data from different houses are non-identically and independently distributed (non-IID), which makes a general model fail to predict the load of a specific house accurately. Building a separate model for each house is possible but incurs a large computation cost. We propose a federated transfer learning approach for STLF, deep federated adaptation (DFA), to deal with these problems. The approach adopts a federated learning architecture to train a global model without undermining privacy, and then leverages a multiple-kernel variant of maximum mean discrepancy (MK-MMD) to fine-tune the global model so that it adapts to the specific house's prediction task. Experimental results on real residential datasets show that DFA achieves the best forecasting performance among the compared baseline models, and the federated architecture of DFA has a remarkable advantage in computation time. The DFA framework is also extended with alternative transfer learning methods, all of which achieve good performance on STLF. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
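
The summary describes a two-stage procedure: a global model trained with a federated learning architecture, then fine-tuned with an MK-MMD loss so it adapts to one house. The Python/PyTorch sketch below only illustrates that idea under assumptions and is not the authors' implementation: the FedAvg-style weighted aggregation, the Gaussian multiple-kernel MMD, the network shapes (24 input lags, 32-dimensional features), the kernel bandwidths, the loss weight lam, and the names feature_net, head, and fine_tune_step are all introduced here for illustration. The mapping of "source" to data representative of the federation and "target" to the newly built house is likewise an assumption.

# Minimal sketch (not the paper's code) of federated aggregation plus
# MK-MMD-based fine-tuning for a single house's load forecaster.
import torch
import torch.nn as nn


def fedavg(state_dicts, weights):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights))
    return avg


def mk_mmd(source, target, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Multiple-kernel MMD^2 between two feature batches using Gaussian kernels."""
    x = torch.cat([source, target], dim=0)
    dist = torch.cdist(x, x).pow(2)                    # pairwise squared distances
    k = sum(torch.exp(-dist / (2 * b ** 2)) for b in bandwidths) / len(bandwidths)
    n = source.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()


# Hypothetical forecaster: a feature extractor (initialized from the global
# federated model) and a small regression head; sizes are illustrative.
feature_net = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(feature_net.parameters()) + list(head.parameters()), lr=1e-3)
mse, lam = nn.MSELoss(), 0.5  # lam weights the MK-MMD term; value is an assumption


def fine_tune_step(x_target, y_target, x_source):
    """One adaptation step: supervised loss on the target house plus MK-MMD alignment.

    x_target, x_source: (batch, 24) feature tensors; y_target: (batch, 1) loads.
    """
    f_t, f_s = feature_net(x_target), feature_net(x_source)
    loss = mse(head(f_t), y_target) + lam * mk_mmd(f_s, f_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

In this sketch, the federated stage would repeatedly call fedavg on client updates to obtain the global weights, and the adaptation stage would then run fine_tune_step on the specific house so that its feature distribution is aligned with the source batches while the forecasting error is minimized.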
ISSN: 1424-8220
DOI: 10.3390/s22093264