SHAP (SHapley Additive exPlanations) attributes a model’s prediction for a single instance to its input features using [[Shapley values]]. (Contrast with [[feature importance]], which only tells you a feature’s relative importance in the model aggregated across all inputs, not its contribution to any one prediction.)
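The Shapley value of a feature is its marginal contribution averaged over all orderings of features. A minimal sketch of the exact computation on a hypothetical two-player cooperative game (the game definition `v` is invented for illustration; real SHAP approximates this over feature coalitions):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game defined by value(coalition)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                s = set(coalition)
                # Weight = probability that s precedes p in a random ordering
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Hypothetical toy game: "a" and "b" each contribute 10 alone,
# plus a +5 interaction bonus when both are present.
def v(coalition):
    return 10 * len(coalition & {"a", "b"}) + (5 if {"a", "b"} <= coalition else 0)

phi = shapley_values(["a", "b"], v)
# Efficiency property: attributions sum to v({a, b}) - v(set()) = 25,
# and by symmetry the interaction bonus is split evenly (12.5 each).
```

The efficiency property is what makes SHAP values "additive": per-instance attributions sum exactly to the prediction minus the base value.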
The `shap` package in [[Python]] computes SHAP values for a fitted `model` and design matrix `X`:
```python
import shap
shap.initjs() # initialize for notebook viz
explainer = shap.Explainer(model)
shap_values = explainer(X) # subset X for faster performance
# Waterfall plot for first observation
shap.plots.waterfall(shap_values[0])
# Stacked force plot
shap.plots.force(shap_values[0:100])
# Absolute mean SHAP
shap.plots.bar(shap_values)
# Beeswarm plot
shap.plots.beeswarm(shap_values)
# Partial dependence plot for one feature
shap.plots.scatter(shap_values[:, "<col>"])
```
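For a linear model the additivity behind those plots can be checked directly: with independent features, the SHAP value of feature $i$ is $w_i(x_i - \mathbb{E}[x_i])$ and the base value is the mean prediction. A self-contained sketch using hypothetical coefficients (no `shap` dependency, just NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w = np.array([2.0, -1.0, 0.5])   # hypothetical linear coefficients
b = 0.3

def predict(X):
    return X @ w + b

# Base value = expected prediction over the background data
base_value = predict(X).mean()

# Closed-form SHAP values for a linear model: one attribution row per observation
shap_vals = w * (X - X.mean(axis=0))

# Additivity: base value + per-row SHAP sums recover every prediction exactly
recovered = base_value + shap_vals.sum(axis=1)
```

This is the same identity the waterfall and force plots visualize: each bar/arrow is one feature's SHAP value, and they stack from the base value up to the model output.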
> [!Tip]- Additional Resources
> - [SHAP Playlist](https://www.youtube.com/watch?v=MQ6fFDwjuco&list=PLqDyyww9y-1SJgMw92x90qPYpHgahDLIK&index=1) | A Data Odyssey
> - [Intro to SHAP](https://medium.com/data-science/introduction-to-shap-with-python-d27edc23c454) | Conor O'Sullivan