Cybersecurity Threat Detection Through Explainable Artificial Intelligence (XAI): A Data-Driven Framework
DOI: https://doi.org/10.3126/irjmmc.v6i2.80687
Keywords: decision tree, interpretable machine learning, threat detection, rule-based algorithm, data-driven framework
Abstract
Cybersecurity increasingly relies on machine learning models to detect and respond to cyber threats. Many modern machine learning models for cybersecurity are opaque and largely unexplainable to users, which poses serious challenges to adoption and trust, especially in high-stakes environments. A "black box" model may output a prediction, such as reporting a threat, without providing any meaningful explanation to users, which understandably frustrates users and security practitioners alike. The purpose of this research study is to introduce an interpretable machine learning methodology for cybersecurity that integrates Explainable AI (XAI) methods designed to improve an analyst's or team's ability to both operate and enhance a threat detection model in terms of performance, usability, and interpretability. The research produced a data-driven XAI framework that renders the decision-making of the underlying machine learning models interpretable to teams of cybersecurity experts. The supported XAI methods included interpretable algorithms (i.e., a decision tree and a rule-based algorithm) and LIME. The study also demonstrated measurable improvements in threat detection accuracy using interpretable machine learning models while providing human-interpretable, legible, and understandable explanations of model predictions. These benefits aid decision making, reduce response times, and improve communication between data science and cybersecurity practitioners. The framework uses interactive visualization tools to increase engagement, decrease reliance on black-box models, and encourage informed, data-driven security behaviors.
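To illustrate the kind of approach the abstract describes, the following is a minimal sketch (not the authors' implementation) of pairing an interpretable decision tree classifier with a LIME explanation for a single threat prediction. The feature names, synthetic data, and labeling rule are hypothetical and used only for demonstration.

```python
# Minimal sketch: decision-tree threat classifier explained with LIME.
# Feature names and data are hypothetical, not from the study's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["bytes_sent", "duration_s", "failed_logins", "dst_port"]  # hypothetical
X = rng.random((1000, 4))
# Hypothetical labeling rule: flag flows with many failed logins and long duration.
y = ((X[:, 2] > 0.7) & (X[:, 1] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree whose splits read as rules.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: LIME attributes one prediction to the input features.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["benign", "threat"],
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=4)

print("Predicted class:", clf.predict(X_test[:1])[0])
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

In a framework like the one described, such per-prediction feature attributions could be fed into interactive visualizations so analysts can see which features drove a threat alert.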
License
Copyright (c) 2025 International Research Journal of MMC (IRJMMC)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.