ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Road Less Traveled: Investigating Robustness and Explainability in CNN Malware Detection

3 March 2025
Matteo Brosolo
Vinod Puthuvath
Mauro Conti
    AAML
Abstract

Machine learning has become a key tool in cybersecurity, improving both attack strategies and defense mechanisms. Deep learning models, particularly Convolutional Neural Networks (CNNs), have demonstrated high accuracy in detecting malware images generated from binary data. However, the decision-making process of these black-box models remains difficult to interpret. This study addresses this challenge by integrating quantitative analysis with explainability tools such as Occlusion Maps, HiResCAM, and SHAP to better understand CNN behavior in malware classification. We further demonstrate that obfuscation techniques can reduce model accuracy by up to 50%, and propose a mitigation strategy to enhance robustness. Additionally, we analyze heatmaps from multiple tests and outline a methodology for identifying artifacts, aiding researchers in conducting detailed manual investigations. This work contributes to improving the interpretability and resilience of deep learning-based intrusion detection systems.
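The pipeline the abstract describes rests on two ideas: rendering a binary's raw bytes as a grayscale image so a CNN can classify it, and probing the trained model with occlusion to see which regions drive its decision. A minimal sketch of both steps is below; `score_fn` is a hypothetical stand-in for the paper's CNN (any callable returning a malware score), and the byte-to-pixel mapping follows the common convention of one byte per pixel with a fixed row width — details of the authors' actual preprocessing may differ.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Render a raw binary as a 2-D grayscale array (one byte = one pixel)."""
    buf = np.frombuffer(data, dtype=np.uint8)
    height = int(np.ceil(len(buf) / width))
    # Zero-pad so the byte stream fills the last row completely.
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(height, width)

def occlusion_map(img: np.ndarray, score_fn, patch: int = 8) -> np.ndarray:
    """Slide a zeroed patch over the image and record, per position,
    how much the model's score drops when that region is hidden."""
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    base = score_fn(img)
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = img.copy()
            occluded[i : i + patch, j : j + patch] = 0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy demo: a "binary" whose first half is 0xff bytes, second half zeros,
# scored by a dummy model (mean brightness). Occluding the bright half
# lowers the score, so the heatmap highlights it.
img = bytes_to_image(b"\xff" * 512 + b"\x00" * 512, width=32)
heat = occlusion_map(img, score_fn=lambda x: x.mean(), patch=8)
```

Large positive cells in `heat` mark regions the (dummy) model relies on; with a real CNN these would be the byte regions the paper inspects for artifacts.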

@article{brosolo2025_2503.01391,
  title={The Road Less Traveled: Investigating Robustness and Explainability in CNN Malware Detection},
  author={Matteo Brosolo and Vinod Puthuvath and Mauro Conti},
  journal={arXiv preprint arXiv:2503.01391},
  year={2025}
}