Habib Ullah Manzoor

and 3 more

The integration of artificial intelligence (AI) into energy networks has significantly advanced short-term forecasting, particularly in smart meter applications. However, as distributed energy resources proliferate and energy systems grow in complexity, traditional centralized approaches to data analysis fall short of meeting privacy-preservation requirements. Federated learning (FL) has emerged as a promising solution, leveraging distributed data sources while safeguarding user privacy. Nonetheless, FL remains vulnerable to adversarial attacks during model training, which undermines its reliability and effectiveness. Existing techniques for mitigating these attacks often require additional detection frameworks, imposing an extra burden on devices. To address this issue, we propose a novel method called federated random layer aggregation (FedRLA), which aggregates only one randomly chosen neural network layer on the server in a privacy-aware manner, leaving the remaining layers unchanged. FedRLA exhibits superior resilience against adversarial attacks by confining an attacker's influence to a single neural network layer. Our simulations on household energy consumption demonstrate that FedRLA transmits 3.56 times less data than FedAvg during global model training, improving energy efficiency and conserving resources. Furthermore, FedRLA performs better in the presence of differential privacy under both attack and no-attack conditions.
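The core mechanism described above — the server averaging only one randomly chosen layer per round while clients keep all other layers local — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `fedrla_aggregate`, the list-of-arrays model representation, and the layer shapes are all assumptions made for the example.

```python
import numpy as np

def fedrla_aggregate(client_models, rng):
    """Single-random-layer aggregation (FedRLA-style sketch, assumed API).

    client_models: list of models, each a list of per-layer weight arrays.
    Returns the chosen layer index and the updated per-client models:
    only layer k is replaced by the cross-client average; every other
    layer keeps each client's local weights.
    """
    n_layers = len(client_models[0])
    k = int(rng.integers(n_layers))  # server picks one layer index at random
    avg_layer = np.mean([m[k] for m in client_models], axis=0)
    updated = []
    for m in client_models:
        new_m = [w.copy() for w in m]
        new_m[k] = avg_layer.copy()  # only the chosen layer is aggregated
        updated.append(new_m)
    return k, updated

# Illustrative round: 2 clients, 3 layers each
rng = np.random.default_rng(0)
clients = [[np.full((2, 2), float(i + j)) for j in range(3)] for i in range(2)]
k, updated = fedrla_aggregate(clients, rng)
```

Because clients upload (and download) a single layer instead of the whole model, per-round communication shrinks roughly in proportion to that layer's share of the total parameter count, which is consistent with the reported reduction relative to FedAvg.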

Habib Ullah Manzoor

and 2 more

Federated Learning (FL) in load forecasting improves predictive accuracy by leveraging data from distributed load networks while preserving data privacy. However, the heterogeneous nature of smart grid load forecasting poses significant challenges to conventional FL methods, which are also unsuitable for resource-constrained devices. To address these issues, we propose a novel Lightweight Single Layer Aggregation (LSLA) framework tailored for smart grid networks. The LSLA framework mitigates data heterogeneity in load forecasting by emphasizing local learning and using only partial updates from local devices for model aggregation. Additionally, the framework is optimized for resource-constrained devices through a stopping criterion during model training and weight quantization. Our results show that quantization incurs an acceptable degradation of 0.01% in Mean Absolute Error (MAE). Compared to traditional FL, LSLA achieves up to a 1000-fold reduction in communication overhead. Moreover, an 8-bit fixed-point representation of neural network weights yields a 75% reduction in storage/memory requirements and lowers computational cost by replacing complex floating-point computational units with simpler fixed-point counterparts. By addressing data heterogeneity and minimizing data storage, computation, and communication overheads, our novel framework is well-suited for resource-constrained devices in smart grid networks.
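The 8-bit weight quantization mentioned above can be illustrated with a simple symmetric fixed-point scheme: float32 weights (4 bytes each) become int8 values (1 byte each), giving the 75% storage reduction, at the cost of a small reconstruction error. This is a generic sketch of one common quantization scheme, not necessarily the exact one used by LSLA; the function names and the per-tensor scale factor are assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Maps the largest absolute weight to 127; reconstruction error per
    weight is bounded by scale / 2.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale works
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor
rng = np.random.default_rng(1)
w = rng.normal(size=1000).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Here `q.nbytes` is a quarter of `w.nbytes`, matching the 75% memory saving, and the small per-weight rounding error (at most half the scale) is what shows up downstream as the modest degradation in MAE.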