Explainable AI for Malware Classification: Do Analysts Trust What They Can Understand?
APA Citation
Burke, C., & Takahashi, Y. (2024). Explainable AI for malware classification: Do analysts trust what they can understand? *Journal of Information Security and Applications*. https://doi.org/10.1016/j.jisa.2024.103756
What Did This Cybersecurity Research Find?
This cybersecurity AI trust study compared analyst adoption rates for black-box versus explainable ML malware classifiers across 10 security operations centers (SOCs). Analysts presented with explainable AI models (showing which features drove each classification) adopted AI recommendations 37% more frequently than those using black-box models, and made 22% fewer override errors when the AI explanation aligned with their domain knowledge.
Key Findings
1. Explainable AI models were adopted 37% more frequently than equivalent black-box models
2. Override errors decreased 22% when AI explanations aligned with analyst domain knowledge
3. Feature-attribution explanations (showing which indicators mattered) were preferred over rule-based explanations
4. Analysts with 5+ years of experience were more likely to override AI even when explanations were provided
5. Training analysts on model logic for 2 hours increased adoption by an additional 15%
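To make the preferred explanation style concrete: feature attribution means showing each indicator's signed contribution to the verdict. A minimal sketch, assuming a linear (logistic-regression-style) classifier where a feature's contribution is simply its weight times its value; all feature names and weights below are hypothetical, not from the paper:

```python
# Sketch of feature-attribution output for a linear malware classifier.
# Feature names and weights are hypothetical illustrations.
import math

WEIGHTS = {
    "entropy_of_packed_sections": 2.1,
    "num_suspicious_api_calls": 1.4,
    "has_valid_signature": -1.8,   # negative weight: signing lowers suspicion
    "file_size_kb": 0.002,
}
BIAS = -0.5

def classify_with_explanation(sample):
    """Score a sample and return per-feature contributions (weight * value),
    the kind of attribution analysts can check against domain knowledge."""
    contributions = {f: WEIGHTS[f] * sample.get(f, 0.0) for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    prob_malicious = 1 / (1 + math.exp(-score))
    # Rank features so the strongest drivers of the verdict appear first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return prob_malicious, ranked

sample = {
    "entropy_of_packed_sections": 0.9,
    "num_suspicious_api_calls": 3,
    "has_valid_signature": 0,
    "file_size_kb": 512,
}
prob, ranked = classify_with_explanation(sample)
print(f"P(malicious) = {prob:.2f}")
for feature, contrib in ranked:
    print(f"  {feature}: {contrib:+.2f}")
```

For nonlinear models, production tools typically compute analogous per-feature values with methods such as SHAP, but the analyst-facing output has the same shape: a ranked list of indicators with signed contributions.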
How Does This Apply to Cybersecurity Careers?
ML engineers building security tools should invest in explainability to drive adoption. SOC analysts evaluating AI tools can request explainability features to improve their own decision quality.
Frequently Asked Questions
Where was this cybersecurity research published?
This study was published in the *Journal of Information Security and Applications* in 2024 (DOI: 10.1016/j.jisa.2024.103756).