Sultan Uddin Khan and 2 more authors

Deep learning (DL) has gained prominence as an effective approach for enhancing the efficiency of various applications, including smart grids (SG). Although these models excel at classifying power quality disturbances, their vulnerability to trojan attacks introduces potential complications. In this paper, we introduce two novel algorithms for executing trojan attacks on DL models handling time series data in SG, tailored for both white-box and black-box settings. For the white-box setting, our algorithm, 'Sneaky Spectral Strike (S³)', leverages the frequency domain and trigger optimization to perform trojan attacks, achieving a remarkable average fooling rate of 99.9% across various DL models. The algorithm also balances the signal-to-noise ratio, trojan model accuracy on clean data, and fooling rate, making it highly effective at fooling the DL model while remaining imperceptible to human observers in the power control center (PCC). For the black-box setting, we propose a novel algorithm, 'Lite Datanet Sneaky Spectral Strike', which combines a simple DL model with a small sample dataset to create trojan triggers that are highly effective, stealthy, and transferable to the DL model deployed in the PCC. This approach achieves a 99.86% average fooling rate across different advanced DL models, highlighting the effectiveness of resource-efficient strategies in DL-based SG. Both algorithms underscore the potential vulnerabilities in DL models used in SG and mark a significant advancement in adversarial machine learning.
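The abstract does not disclose the full S³ procedure, but the core idea of embedding a trojan trigger in the frequency domain of a time-series sample can be sketched as below. The trigger bins, amplitude, waveform, and SNR check are illustrative assumptions, not the optimized values the algorithm would produce; inject_spectral_trigger and snr_db are hypothetical helper names.

import numpy as np

def inject_spectral_trigger(signal, trigger_bins, trigger_amp):
    # Embed a trigger by perturbing a few frequency bins of the sample's
    # spectrum; the bin indices and amplitude here are illustrative, not
    # the optimized values S^3 would search for.
    spectrum = np.fft.rfft(signal)
    spectrum[trigger_bins] += trigger_amp * np.abs(spectrum).max()
    return np.fft.irfft(spectrum, n=len(signal))

def snr_db(clean, poisoned):
    # Signal-to-noise ratio (dB) between clean and poisoned samples,
    # used as a proxy for imperceptibility to operators in the PCC.
    noise = poisoned - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Usage on a synthetic 60 Hz waveform sampled at 3.2 kHz (hypothetical data).
t = np.arange(0, 0.2, 1 / 3200)
clean = np.sin(2 * np.pi * 60 * t)
poisoned = inject_spectral_trigger(clean, trigger_bins=[25, 40], trigger_amp=0.02)
print(f"SNR of poisoned sample: {snr_db(clean, poisoned):.1f} dB")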

Mohammed Mynuddin and 6 more authors

In recent years, the popularity of network intrusion detection systems (NIDS) has surged, driven by the widespread adoption of cloud technologies. Given the escalating network traffic and the continuous evolution of cyber threats, a highly efficient NIDS has become paramount for ensuring robust network security. Typically, intrusion detection systems either use a pattern-matching system or leverage machine learning for anomaly detection. While pattern-matching approaches tend to suffer from a high false positive rate (FPR), machine learning-based systems, such as SVM and KNN, predict potential attacks by recognizing distinct features. However, these models often operate on a limited set of features, resulting in lower accuracy and higher FPR. In our research, we introduce a deep learning model that combines a Convolutional Neural Network (CNN) with a Bidirectional LSTM (Bi-LSTM) to learn spatial and temporal features of the data. The model, evaluated on the NSL-KDD dataset, exhibits a high detection rate with a minimal false positive rate. To enhance accuracy, K-fold cross-validation was employed during training. This paper showcases the effectiveness of the CNN with Bi-LSTM architecture in achieving superior performance across metrics such as accuracy, F1-score, precision, and recall. The binary classification model trained on the NSL-KDD dataset demonstrates outstanding performance, achieving a high accuracy of 99.5% after 10-fold cross-validation, with an average accuracy of 99.3%. The model exhibits a remarkable detection rate (0.994) and a low false positive rate (0.13). In the multiclass setting, the model maintains exceptional precision (99.25%), reaching a peak accuracy of 99.59% for k = 10. Notably, the detection rate for k = 10 is 99.43%, and the mean false positive rate is 0.214925.
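A minimal Keras sketch of a CNN combined with a Bi-LSTM in the spirit of the architecture described above; the layer sizes, dropout rate, and the treatment of the NSL-KDD feature vector as a one-channel sequence are assumptions for illustration, not the exact configuration used in the paper, and build_cnn_bilstm is a hypothetical helper name.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bilstm(num_features, num_classes=2):
    # Treat the feature vector as a one-channel sequence so Conv1D can
    # extract local (spatial) patterns, then let a bidirectional LSTM
    # model dependencies across the feature sequence.
    model = models.Sequential([
        layers.Input(shape=(num_features, 1)),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.3),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# 122 input features is typical for NSL-KDD after one-hot encoding the
# categorical fields; adjust to match the actual preprocessing pipeline.
model = build_cnn_bilstm(num_features=122)
model.summary()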

Sultan Uddin Khan and 3 more authors

The utilization of deep learning models has been widely recognized for its significant contribution to the enhancement of smart grid operations, particularly in the domain of power quality disturbance (PQD) classification. Nevertheless, the emergence of vulnerabilities like targeted universal adversarial (TUA) attacks can significantly undermine the reliability and security of deep learning models. These attacks can exploit the model’s weaknesses, causing it to misclassify PQDs with potentially catastrophic consequences. In our previous research, we examined, for the first time, the vulnerability of deep learning models to TUA attacks on time series data in smart grids by introducing a novel algorithm that mounts effective attacks while maintaining a trade-off between fooling rate and imperceptibility. While this attack method demonstrated notable efficacy, it also emphasized the pressing need for robust defensive mechanisms to safeguard these critical systems. This paper provides a thorough examination and evaluation of different defense strategies, specifically adversarial training, defensive distillation, and feature squeezing, in order to identify the most effective method for mitigating TUA attacks on time series data at three levels of imperceptibility (high, medium, and low). Based on our analysis, adversarial training demonstrates a significant reduction in the success rate of attacks. Specifically, the technique reduced fooling rates by an average of 23.73% for high imperceptibility, 31.04% for medium imperceptibility, and a substantial 42.96% for low imperceptibility. These findings highlight the crucial role of adversarial training in enhancing the integrity of deep learning applications.
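The universal perturbation studied in the paper is not reproduced here; the sketch below only illustrates adversarial training in general, using an FGSM-style gradient-sign perturbation as a hypothetical stand-in for the TUA trigger. The toy model, epsilon, loss weighting, and random batch are all illustrative assumptions, and fgsm_perturb / adversarial_train_step are hypothetical names.

import tensorflow as tf

def fgsm_perturb(model, x, y, epsilon):
    # Gradient-sign perturbation, used as a stand-in for the TUA trigger.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)

def adversarial_train_step(model, optimizer, x, y, epsilon=0.05):
    # One adversarial-training step: the loss mixes clean and perturbed
    # batches so the classifier learns to resist small structured noise.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    with tf.GradientTape() as tape:
        loss = 0.5 * loss_fn(y, model(x, training=True)) + \
               0.5 * loss_fn(y, model(x_adv, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Usage with a toy 1-D CNN classifier on random PQD-like batches.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(640, 1)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(8, activation="softmax"),
])
optimizer = tf.keras.optimizers.Adam()
x = tf.random.normal([32, 640, 1])
y = tf.random.uniform([32], maxval=8, dtype=tf.int32)
print("mixed loss:", float(adversarial_train_step(model, optimizer, x, y)))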

Sultan Uddin Khan and 2 more authors

In the dynamic realm of smart grid technology, it is of utmost importance to guarantee the efficient coordination of Inverse Definite Minimum Time (IDMT) overcurrent and earth fault relays. This coordination is crucial for maintaining the stability and safety of the grid. Conventional approaches to establishing these settings are laborious and error-prone, frequently resulting in suboptimal outcomes. The primary objective of this work is to introduce a groundbreaking approach to this essential task through the use of machine learning techniques. A comprehensive assessment was conducted on several commonly employed machine learning models, namely Linear Regression, Decision Tree, Random Forest, Support Vector Regression, and Gradient Boosting, to determine their effectiveness in the context of relay coordination. The Gradient Boosting model demonstrated superior performance compared to the other models, achieving an R²-score of nearly 97% and exhibiting exceptionally low values for both Mean Square Error (MSE) and Mean Absolute Error (MAE). This indicates a strong fit to the data and a high capacity to generalize to unseen data, as evidenced by a cross-validation score of 86.2%. The results of our study indicate that Gradient Boosting offers a highly precise, efficient, and dependable strategy for relay coordination in smart grid systems. Consequently, it emerges as an appealing alternative to conventional calculation-based methods.
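As a rough illustration of the Gradient Boosting workflow described above, the sketch below fits a scikit-learn GradientBoostingRegressor on a synthetic IDMT standard-inverse dataset and reports R², MSE, MAE, and a cross-validation score. The feature set, value ranges, and hyperparameters are assumptions for illustration, not the paper's actual data or settings.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical dataset: fault current (A), relay pickup current (A), and time
# multiplier setting (TMS), with the IDMT standard-inverse operating time (s)
# as the regression target.
rng = np.random.default_rng(0)
fault_current = rng.uniform(500, 5000, 400)
pickup_current = rng.uniform(50, 200, 400)
tms = rng.uniform(0.05, 1.0, 400)
X = np.column_stack([fault_current, pickup_current, tms])
y = 0.14 * tms / ((fault_current / pickup_current) ** 0.02 - 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2 :", r2_score(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
print("5-fold CV R2:", cross_val_score(model, X, y, cv=5).mean())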