Mohammed Mynuddin and 6 more

In recent years, the popularity of network intrusion detection systems (NIDS) has surged, driven by the widespread adoption of cloud technologies. Given escalating network traffic and the continuous evolution of cyber threats, a highly efficient NIDS has become essential for robust network security. Intrusion detection systems typically rely either on pattern matching or on machine learning for anomaly detection. Pattern-matching approaches tend to suffer from a high false positive rate (FPR), while machine learning-based systems such as SVM and KNN predict potential attacks by recognizing distinctive features. However, these models often operate on a limited set of features, resulting in lower accuracy and higher FPR. In our research, we introduce a deep learning model that combines a Convolutional Neural Network (CNN) with a Bidirectional LSTM (Bi-LSTM) to learn both spatial and temporal features of the data. Evaluated on the NSL-KDD dataset, the model exhibits a high detection rate with a minimal false positive rate. To improve accuracy, K-fold cross-validation was employed during training. This paper demonstrates the effectiveness of the CNN with Bi-LSTM algorithm in achieving superior performance on metrics including accuracy, F1-score, precision, and recall. The binary classification model trained on the NSL-KDD dataset performs strongly, reaching a peak accuracy of 99.5% under 10-fold cross-validation with an average accuracy of 99.3%, a high detection rate (0.994), and a low false positive rate (0.13). In the multiclass setting, the model maintains a precision of 99.25%, reaching a peak accuracy of 99.59% at k = 10. The detection rate at k = 10 is 99.43%, and the mean false positive rate is 0.214925.
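The abstract's CNN + Bi-LSTM architecture is not reproduced here, but the 10-fold cross-validation protocol and the reported metrics (accuracy, detection rate, false positive rate) can be sketched minimally. This is an illustrative sketch only: the synthetic data and the nearest-centroid classifier are stand-ins of my own, not the paper's model or the NSL-KDD dataset.

```python
import numpy as np

def kfold_metrics(X, y, k=10, seed=0):
    """Evaluate a stand-in binary classifier with k-fold cross-validation,
    reporting per-fold accuracy, detection rate (attack recall), and FPR."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    results = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Nearest-centroid classifier: a toy stand-in for the CNN + Bi-LSTM
        c0 = X[train][y[train] == 0].mean(axis=0)  # "normal" centroid
        c1 = X[train][y[train] == 1].mean(axis=0)  # "attack" centroid
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        yt = y[test]
        tp = np.sum((pred == 1) & (yt == 1))
        fp = np.sum((pred == 1) & (yt == 0))
        tn = np.sum((pred == 0) & (yt == 0))
        fn = np.sum((pred == 0) & (yt == 1))
        acc = (tp + tn) / len(yt)
        dr = tp / (tp + fn) if (tp + fn) else 0.0   # detection rate
        fpr = fp / (fp + tn) if (fp + tn) else 0.0  # false positive rate
        results.append((acc, dr, fpr))
    return results

# Synthetic two-class data standing in for extracted traffic features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (300, 8)), rng.normal(3, 1, (300, 8))])
y = np.array([0] * 300 + [1] * 300)
res = kfold_metrics(X, y, k=10)
mean_acc = np.mean([r[0] for r in res])
```

Averaging the per-fold scores, as done here for accuracy, is how the paper's "average accuracy after 10-fold cross-validation" figure is obtained.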

Sultan Uddin Khan and 3 more

The contribution of deep learning models to smart grid operations is widely recognized, particularly in the domain of power quality disturbance (PQD) classification. Nevertheless, vulnerabilities such as targeted universal adversarial attacks can significantly undermine the reliability and security of these models: by exploiting a model's weaknesses, such attacks can cause it to misclassify PQDs, with potentially catastrophic consequences. In our previous research, we examined for the first time the vulnerability of deep learning models to targeted universal adversarial attacks on time series data in smart grids, introducing a novel algorithm that attacks effectively while maintaining a trade-off between fooling rate and imperceptibility. While this attack method demonstrated notable efficacy, it also underscored the pressing need for robust defensive mechanisms to safeguard these critical systems. This paper provides a thorough examination and evaluation of three defense strategies, namely adversarial training, defensive distillation, and feature squeezing, to identify the most effective method for mitigating targeted universal adversarial (TUA) attacks on time series data at three levels of imperceptibility (high, medium, and low). Based on our analysis, adversarial training yields a significant reduction in attack success: it reduced fooling rates by an average of 23.73% for high imperceptibility, 31.04% for medium imperceptibility, and a substantial 42.96% for low imperceptibility. These findings highlight the crucial role of adversarial training in preserving the integrity of deep learning applications.
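The mechanics of a targeted universal attack and the adversarial-training defense can be sketched on a toy problem. This is a minimal illustration under my own assumptions, not the paper's algorithm or model: the "classifier" is a plain logistic regression on synthetic windows, and the single fixed perturbation is built FGSM-style from the sign of the decision gradient, with its magnitude eps playing the role of the imperceptibility level.

```python
import numpy as np

def train_logreg(X, y, epochs=200, lr=0.5):
    """Gradient-descent logistic regression, a toy stand-in for a PQD classifier."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

def fooling_rate(w, b, X1, delta):
    """Fraction of correctly classified class-1 inputs that one fixed
    (universal) perturbation pushes into the target class 0."""
    base = predict(w, b, X1)
    adv = predict(w, b, X1 + delta)
    fooled = (base == 1) & (adv == 0)
    return fooled.sum() / max(base.sum(), 1)

rng = np.random.default_rng(0)
# Toy feature windows: disturbance class 1 vs. class 0
X0 = rng.normal(-1, 1, (200, 16)); X1 = rng.normal(1, 1, (200, 16))
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)
w, b = train_logreg(X, y)

# One targeted universal perturbation toward class 0; larger eps means a
# stronger, less imperceptible attack (the "low imperceptibility" regime)
eps = 1.0
delta = -eps * np.sign(w)
fr_before = fooling_rate(w, b, X1, delta)

# Adversarial training: augment with perturbed samples under their TRUE labels,
# then retrain, so the model learns to resist the perturbation
X_adv = np.vstack([X, X1 + delta])
y_adv = np.concatenate([y, np.ones(200)])
w2, b2 = train_logreg(X_adv, y_adv)
fr_after = fooling_rate(w2, b2, X1, delta)
```

The drop from `fr_before` to `fr_after` is the toy analogue of the fooling-rate reductions the paper reports for adversarial training.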