Generating Explanations for Machine Learning Based Malware Detection Using SHAP
In recent years, researchers have been analyzing the effectiveness of machine learning models for malware detection. These approaches range from decision trees and clustering to more complex techniques such as support vector machines and neural networks. It is relatively well accepted that, for most use cases in this domain, neural networks are the superior approach. This, however, comes with a caveat: neural networks are notoriously complex, so their decisions are often accepted without questioning why the model made a specific prediction. This black-box characteristic of neural networks has challenged researchers to explore methods for explaining neural networks and their decision-making processes. In this work, we apply the SHAP explainable machine learning approach to a collection of machine learning models to show why these models make the decisions they do and which features contribute most to those decisions.
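The attribution idea behind SHAP can be illustrated without the library itself. The sketch below computes exact Shapley values by brute-force enumeration of feature coalitions for a toy linear "detector"; the feature names (`entropy`, `num_imports`, `section_count`), the baseline of 0 for absent features, and the scoring function are all hypothetical assumptions for illustration, not the models or features used in this work.

```python
from itertools import combinations
from math import factorial

# Hypothetical malware-detection features (assumed for illustration only)
FEATURES = ["entropy", "num_imports", "section_count"]

def score(present, x):
    """Toy linear 'model': features absent from the coalition are
    replaced by a baseline value of 0 (a common SHAP convention)."""
    filled = {f: (x[f] if f in present else 0.0) for f in FEATURES}
    return 0.6 * filled["entropy"] + 0.3 * filled["num_imports"] + 0.1 * filled["section_count"]

def shapley_value(feature, x):
    """Exact Shapley value: weighted average of the feature's marginal
    contribution over every coalition of the remaining features."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (score(set(subset) | {feature}, x) - score(set(subset), x))
    return total

x = {"entropy": 7.2, "num_imports": 3.0, "section_count": 5.0}
attributions = {f: shapley_value(f, x) for f in FEATURES}
# Shapley values satisfy local accuracy: attributions sum to
# score(all features) - score(empty coalition).
print(attributions)
```

Because the toy model is linear, each attribution reduces to the feature's coefficient times its value, and the attributions sum exactly to the full prediction minus the baseline score; the SHAP library applies the same principle, with efficient approximations, to models where this brute-force enumeration is infeasible.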