
Shapley summary plots: the latest addition to H2O.ai’s Explainability arsenal


By Parul Pandey | April 21, 2021


It is impossible to deploy successful AI models without analyzing the risks involved. Model overfitting, perpetuation of historical human bias, and data drift are just some of the concerns that need to be addressed before putting models into production. At H2O.ai, explainability is an integral part of our ML products. This deep commitment to better machine learning is built directly into our suite of products, enabling data scientists and business users to better understand what their models are thinking.

H2O.ai has built one of the most extensive suites of capabilities for reviewing machine learning models after they have been developed. Techniques like Shapley Values, K-LIME, Surrogate Decision Trees, Reason Codes, Partial Dependence Plots, Disparate Impact Analysis, Exportable Rules-Based Systems, and more are currently available as part of the Machine Learning Interpretability ecosystem. This article introduces the latest addition to this toolkit: Shapley summary plots for original features, which ship with Driverless AI’s latest release (1.9.2.1). We’ll walk through how they work and how to use them with the help of a real-life employee attrition dataset.

Shapley Summary plots

Shapley explanations are a technique with theoretical support that produces consistent global and local feature contributions. In 2016, Lundberg and Lee proposed Shapley values as a unified approach to explaining any machine learning model’s output. Shapley values break down a prediction to show the impact of each feature. In other words, these values show us how much each feature contributed to the overall prediction. This is particularly helpful at the local level, where you can see each feature’s positive or negative contribution.
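To make the idea concrete in code, here is a minimal sketch using the open-source shap package on a stand-in gradient boosting model with synthetic data. It is purely illustrative and is not the Driverless AI pipeline or the attrition dataset used later.

```python
# Minimal illustration with the open-source `shap` package on a stand-in model;
# not the Driverless AI implementation.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data with named columns.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer returns one Shapley value per feature per row (in log-odds units).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: how much each feature pushed the first prediction up or down.
print(dict(zip(X.columns, shap_values[0])))
```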

The Shapley Summary Plot in Driverless AI is slightly different from the classic Shapley values plot and adds interactive drill-down functionality. The Shapley summary plot combines feature importance with feature effects: it shows the original features versus their local Shapley values on a sample of the dataset. In short, it offers an overview of feature importance and of what is driving it. Let’s understand it in more detail with the help of a case study.
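Driverless AI renders its own interactive version of this plot. As a rough open-source analogue, the shap package can draw a static summary plot from the values computed in the sketch above (each point is one row, positioned by its Shapley value and colored by the feature value):

```python
# Static open-source analogue of a Shapley summary plot, continuing from the
# previous sketch (reuses `shap_values` and `X`); not the interactive
# Driverless AI version.
import shap

shap.summary_plot(shap_values, X)
```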

Case Study: Predicting attrition in a company

Employee attrition is a major problem for many companies these days. A high level of attrition impacts both the business and the company’s talent pool. As a result, companies want to minimize the attrition rate and be prepared in advance for employees who cannot be retained.

In this article, we’ll use the publicly available IBM HR Analytics Employee Attrition & Performance dataset. This dataset contains fictional data about employees’ age, department, gender, education level, etc., along with information on whether the employee left the company. We will use the dataset to build a classification model that predicts each employee’s probability of attrition.
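Driverless AI automates the modelling step, but for orientation, here is a hedged sketch of what a hand-rolled baseline might look like with pandas and scikit-learn; the CSV filename is an assumption based on the usual Kaggle download.

```python
# A hand-rolled baseline for orientation only; Driverless AI automates this step.
# The filename below is an assumption based on the usual Kaggle download.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
y = (df["Attrition"] == "Yes").astype(int)          # binary target
X = pd.get_dummies(df.drop(columns=["Attrition"]))  # one-hot encode categoricals

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```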

This dataset has a total of 35 features (columns) and 1470 employees (rows). Here’s a snapshot of the first few rows and columns of the dataset. 


A snapshot of the dataset

Launching Driverless AI experiment

We’ll load the dataset into a Driverless AI instance. Since we are interested in predicting attrition, we set ‘Attrition’ as the target column. We’ll set the accuracy, time, and interpretability dials to 1, 1, and 10, respectively. Shapley summary plots can be computed at any interpretability setting (from 1 to 10); the values here are for demonstration purposes only. Finally, we launch the experiment. When the experiment finishes building, we should see the following dashboard:


A completed Driverless AI experiment
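The experiment above was configured through the UI; for those who prefer scripting, a similar experiment can be launched from the driverlessai Python client. The sketch below assumes a reachable Driverless AI instance and placeholder credentials, and parameter names may need adjusting to your client version.

```python
# Sketch only: launching a similar experiment from the `driverlessai` Python
# client. The address, credentials, and filename are placeholders.
import driverlessai

client = driverlessai.Client(
    address="http://localhost:12345", username="user", password="password"
)

dataset = client.datasets.create(
    data="WA_Fn-UseC_-HR-Employee-Attrition.csv", data_source="upload"
)

experiment = client.experiments.create(
    train_dataset=dataset,
    target_column="Attrition",
    task="classification",
    accuracy=1,
    time=1,
    interpretability=10,
)
```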

On the experiment dashboard, we can see a variety of automatically generated features and their importance. A completed Driverless AI model can be interpreted either from the MLI page (reached from the top main menu) or from the Completed Experiments page. Let’s go with the first option.

Click the MLI link in the upper-right corner of the UI to view a list of interpreted models, if any exist. Then click the New Interpretation button.

Specify the Driverless AI model that you want to use for the interpretation. Once a model is selected, the target column used for that model is picked up automatically. Also select the dataset that was used to train the model, since the same dataset will be used for the interpretation. Save the settings.

Click on the Recipes tab to specify the MLI recipes to run. For this article, we will select only the Shapley summary plot for original features option. Optionally, you can specify additional Expert Settings for Shapley summary plots when running this interpretation; the user can adjust settings like the sample size of the observations and the number of Shapley value bins. The Shapley summary plot can also show individual scatter plots for each feature, an option that is enabled by default.

Finally, click the Launch MLI  button. 

Interpreting the model

The Driverless AI Model tab is organized into tiles for each interpretation method. In our case, the Shapley summary plot for the original features explainer becomes available.

On clicking the explainer, we get a plot in which feature values are binned by Shapley value, and the average normalized feature value for each bin is plotted. The Shapley summary plot only shows the original features that are used in the Driverless AI model. Note that when feature engineering is enabled, this plot calculates the Shapley values using Driverless AI’s Naive Shapley method. This method takes a transformed feature and distributes its importance equally among the original features that make it up.


Shapley Summary Plot for Original features
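The Naive Shapley bookkeeping described above amounts to an equal split, and the arithmetic can be illustrated with a tiny sketch; the feature names and numbers below are made up purely for demonstration.

```python
# Illustrative arithmetic for the "Naive Shapley" equal split: a transformed
# feature's Shapley value is divided evenly among the original features it was
# built from. Names and numbers below are made up.
from collections import defaultdict

# Shapley values of engineered features for one row.
transformed_shap = {
    "TargetEncode:Department.JobRole": 0.30,  # built from Department and JobRole
    "Age": -0.20,                             # passed through unchanged
}

# Original features each engineered feature was derived from.
parents = {
    "TargetEncode:Department.JobRole": ["Department", "JobRole"],
    "Age": ["Age"],
}

naive_shap = defaultdict(float)
for feature, value in transformed_shap.items():
    for parent in parents[feature]:
        naive_shap[parent] += value / len(parents[feature])

print(dict(naive_shap))  # {'Department': 0.15, 'JobRole': 0.15, 'Age': -0.2}
```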


The global y-axis lists the feature names and the x-axis gives the Shapley value. Within each feature’s row, the height indicates the number of data points with a given Shapley value. The features are ordered by their global Shapley importance (using the Naive Shapley method). The color legend applies to numeric features and maps to their normalized value: yellow is the lowest value and deep orange is the highest. Categorical features appear grey because their levels don’t have a numerical value like continuous features do; however, we can still see the spread of the Shapley values for categorical features. Hovering over any data point shows the Shapley value, the number of rows in the bin, and the average normalized feature value for that bin.


 

Details available on hover
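Under some assumptions about the underlying data, the aggregation behind those hover details (bin rows by Shapley value, then record the row count and the average normalized feature value per bin) might look roughly like this for a single feature:

```python
# Rough sketch of the per-feature aggregation behind the hover details:
# bin rows by Shapley value, then count rows and average the normalized
# feature value in each bin. The inputs are random stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
shap_age = rng.normal(0, 0.1, size=1000)              # stand-in Shapley values for Age
age = rng.integers(18, 60, size=1000).astype(float)   # stand-in raw Age values

normalized_age = (age - age.min()) / (age.max() - age.min())

bins = pd.cut(shap_age, bins=20)                       # Shapley-value bins
summary = (
    pd.DataFrame({"bin": bins, "norm_value": normalized_age})
    .groupby("bin", observed=True)
    .agg(row_count=("norm_value", "size"), avg_norm_value=("norm_value", "mean"))
)
print(summary)
```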

The Shapley value corresponds to the middle of the bin. The bins are constructed so that they are centered around zero, meaning there is always a bin shown at 0. The above plot outputs the Shapley values for the “True” class. We can easily infer that a higher age tends to decrease the predicted attrition probability. The same is true for the monthly salary feature.

On the other hand, a higher value for the distance from home  feature increases the employee’s predicted probability of attrition in the company. 

The height of the bar reflects the number of instances in the bin. Notice that certain features in the chart have a “flat and wide” distribution of Shapley values (spanning from minimum to maximum importance), while others have most of their values centered above or below zero. The plot also shows the value of the global bias. The global bias is needed to relate the Shapley values to the prediction for each individual: for each individual, the sum of the Shapley values plus the global bias should equal the logit of that individual’s prediction.
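That relationship is easy to check on the stand-in open-source model from the earlier sketch (the numbers will differ from the Driverless AI model, but the identity is the same):

```python
# Sanity check on the stand-in model from the earlier `shap` sketch:
# global bias + sum of local Shapley values = logit of the prediction.
import numpy as np

row = 0
bias = float(np.ravel(explainer.expected_value)[0])        # global bias (log-odds)
logit = bias + shap_values[row].sum()                      # bias + contributions
probability = 1.0 / (1.0 + np.exp(-logit))                 # back to a probability

print(probability, model.predict_proba(X.iloc[[row]])[0, 1])  # should match closely
```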

Individual Scatter plots 

There is more to Shapley summary plots. We may want to delve further into a single feature. Clicking on a numeric feature shows a scatter plot of the actual feature values versus their corresponding Shapley values. Note that categorical features do not provide an actual-value scatter plot. The following figure shows a detailed scatter plot of Age versus its Shapley values.


A detailed scatter plot of Age versus its Shapley values
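As a rough open-source analogue of this drill-down, shap.dependence_plot draws the same kind of feature-value-versus-Shapley-value scatter for the stand-in model used earlier:

```python
# Open-source analogue of the per-feature drill-down, reusing `shap_values`
# and `X` from the earlier sketch: actual feature values on the x-axis,
# Shapley values on the y-axis.
import shap

shap.dependence_plot("feature_0", shap_values, X, interaction_index=None)
```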

The Shapley summary plot can be downloaded in PNG format. Additionally, the logs are also available for download.

Conclusion

This article introduced the newly added Shapley summary plots in the Machine Learning Interpretability suite. These summary plots provide more insight into model predictions through interactive, guided visualizations. Techniques like these are a step toward making machine learning more explainable; the idea is to help users understand not only whether a model works, but how it works.

 

 


Parul Pandey

Parul focuses on the intersection of H2O.ai, data science and community. She works as a Principal Data Scientist and is also a Kaggle Grandmaster in the Notebooks category.