How a machine learning model arrives at a certain prediction is often opaque, yet there are many reasons why people want their models to be interpretable (trust, debugging of the model, legal requirements, etc.). One method for explaining individual predictions is the Shapley value, an approach from cooperative game theory that computes the contribution of each feature to the prediction for a single data point.
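For reference, the standard game-theoretic definition can be written as follows, where `val(S)` denotes the prediction obtained when only the features in the subset `S` are known, and `p` is the number of features:

$$
\phi_j(val) = \sum_{S \subseteq \{1,\dots,p\} \setminus \{j\}} \frac{|S|!\,(p-|S|-1)!}{p!}\,\bigl(val(S \cup \{j\}) - val(S)\bigr)
$$

The Shapley value of feature `j` is thus the weighted average of its marginal contributions over all possible coalitions of the other features.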
In machine learning terms, the features (= players) cooperate to achieve the payout (= predicted value), and the Shapley value tells us how much each feature contributes to the prediction. The goal of this project is to implement the Shapley value in an R package and to provide methods for visualizing the results.
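Because enumerating all feature subsets is exponential in `p`, implementations typically estimate the Shapley value by sampling random feature orderings, as in Štrumbelj and Kononenko (2014). Below is a minimal sketch of that sampling idea in base R; all names here (`shapley_value`, `n_sim`, etc.) are illustrative and do not reflect the package's actual API.

```r
# Sketch: Monte Carlo estimate of the Shapley value of one feature for one
# data point x, given a fitted model and a data frame of feature columns.
shapley_value <- function(model, data, x, feature, n_sim = 100) {
  p <- ncol(data)
  contributions <- numeric(n_sim)
  for (m in seq_len(n_sim)) {
    z    <- data[sample(nrow(data), 1), ]  # random data point used for masking
    perm <- sample(p)                      # random order in which features "enter"
    pos  <- which(perm == feature)
    after <- perm[seq_len(p) > pos]        # features entering after ours
    x_with    <- x                         # coalition including the feature
    x_without <- x                         # coalition without the feature
    if (length(after) > 0) x_with[after] <- z[after]
    x_without[c(feature, after)] <- z[c(feature, after)]
    # marginal contribution of the feature for this random coalition
    contributions[m] <- predict(model, newdata = x_with) -
      predict(model, newdata = x_without)
  }
  mean(contributions)  # average marginal contribution = Shapley estimate
}

# Example usage on a built-in dataset:
mod <- lm(mpg ~ ., data = mtcars)
X   <- mtcars[, setdiff(names(mtcars), "mpg")]  # feature columns only
shapley_value(mod, X, X[1, ], feature = which(names(X) == "wt"))
```

Averaging the marginal contributions over many random orderings converges to the exact Shapley value, which is what makes this approximation practical for models with more than a handful of features.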