Explainable AI for Cybersecurity Threat Detection: Interpreting and Visualizing a Hybrid CNN-LSTM Anomaly Detection Model

Authors

  • Abdur Rehman, Lecturer, Department of Software Engineering, University of Malakand. Email: abdurrehman@uom.edu.pk
  • Jehangir Muhammad Khan, Lab Engineer, Department of Software Engineering, University of Malakand. Email: engr.jehangirkhan@uom.edu.pk
  • Syed Muhammad Iqtidar Shah, Lecturer, Department of Software Engineering, University of Malakand. Email: syedmuhammadiqtidarshah@gmail.com

DOI:

https://doi.org/10.63163/jpehss.v3i3.785

Keywords:

Explainable AI (XAI), Cybersecurity, Anomaly Detection, Deep Learning, CNN-LSTM, SHAP, Model Interpretability, Threat Intelligence

Abstract

The growing complexity of cyber threats has driven the adoption of sophisticated AI models, particularly deep learning, for identifying anomalies in network traffic. Although such models can be highly accurate, their black-box nature poses a major challenge for security analysts, who must be able to trust an alert before mounting an effective incident response. To address this, the paper proposes an eXplainable AI (XAI) framework designed to demystify a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) network trained to detect network intrusions. The framework combines three complementary methods: (1) SHapley Additive exPlanations (SHAP) to attribute feature importance, (2) Gradient-weighted Class Activation Mapping (Grad-CAM), adapted to temporal sequences, to highlight salient time steps, and (3) a novel prototype-based explanation that contrasts each anomalous event with its nearest normal baseline. The approach is evaluated on the CIC-IDS2017 dataset. The hybrid CNN-LSTM model attains an F1-score of 98.7%. More importantly, the XAI framework produces actionable, multi-faceted visual explanations that, in a controlled user study, security analysts interpreted 42% faster than conventional anomaly alerts. This work underscores that model interpretability is not merely an add-on feature but a precondition for the successful deployment of AI in security operations centers (SOCs).
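For orientation, the sketch below shows one plausible realization of the pipeline the abstract describes: a small Keras CNN-LSTM over windows of network-flow features, SHAP attributions via a gradient-based explainer, and a nearest-normal prototype comparison. All layer sizes, the window length, the feature count, and every variable name are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch, assuming a Keras CNN-LSTM over windowed flow features.
# Hyperparameters and shapes are illustrative, not the paper's settings.
import numpy as np
import tensorflow as tf
import shap

TIMESTEPS, N_FEATURES = 10, 78  # CIC-IDS2017 CSVs expose roughly 78 flow features

def build_cnn_lstm() -> tf.keras.Model:
    """Conv1D extracts local patterns within each window;
    the LSTM models longer-range temporal dependencies."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, kernel_size=3, padding="same",
                               activation="relu",
                               input_shape=(TIMESTEPS, N_FEATURES)),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(anomalous)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
# Stand-in data; real inputs would be windowed, scaled CIC-IDS2017 flows.
X_train = np.random.rand(256, TIMESTEPS, N_FEATURES).astype("float32")
y_train = np.random.randint(0, 2, size=(256, 1))
model.fit(X_train, y_train, epochs=1, batch_size=32, verbose=0)

# (1) SHAP feature attribution: GradientExplainer supports TF models and
# yields per-timestep, per-feature contributions toward the anomaly score.
background = X_train[:64]  # baseline sample for the expectation term
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_train[:5])
sv = np.asarray(shap_values).squeeze()        # -> (samples, timesteps, features)
feature_importance = np.abs(sv).mean(axis=(0, 1))  # per-feature mean |SHAP|

# (3) Prototype-style explanation: compare an anomalous window to its
# nearest "normal" training window and report the largest feature deviations.
normal_windows = X_train[y_train.ravel() == 0]
anomaly = X_train[:1]
dists = np.linalg.norm((normal_windows - anomaly)
                       .reshape(len(normal_windows), -1), axis=1)
prototype = normal_windows[dists.argmin()]
top_deviating_features = np.abs(anomaly[0] - prototype).mean(axis=0).argsort()[::-1][:5]
```

In this sketch the per-feature SHAP ranking and the prototype deviation ranking answer complementary analyst questions: which flow features drove the score, and how the flagged window differs from ordinary traffic.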

Published

2025-09-15

How to Cite

Explainable AI for Cybersecurity Threat Detection: Interpreting and Visualizing a Hybrid CNN-LSTM Anomaly Detection Model. (2025). Physical Education, Health and Social Sciences, 3(3), 84-100. https://doi.org/10.63163/jpehss.v3i3.785
