
H2O World Explainable Machine Learning Discussions Recap


By Navdeep Gill | April 16, 2019


Earlier this year, in the lead-up to and during H2O World, I was lucky enough to moderate discussions around applications of explainable machine learning (ML) with industry-leading practitioners and thinkers. This post contains links to these discussions, written answers and pertinent resources for some of the most common questions asked during these discussions, and answers to some great audience questions we didn’t have time to address during the live discussions.

Listen To or Watch the Discussions

The first of these discussions was held with Tom Aliff, Senior Vice President for Analytics Solutions Consulting at Equifax. We went over various topics related to the real-world commercial application of explainable ML and also talked about Equifax’s NeuroDecision™ interpretable neural network.[1] To hear more from Tom, check out the discussion by replaying the on-demand webinar.[2]

The second panel discussion was held at H2O World 2019 in San Francisco. The participants were:

  • Agus Sudjianto, Executive Vice President, Head of Corporate Model Risk, Wells Fargo
  • Marc Stein, Founder and Chief Executive Officer, Underwrite.ai
  • Taposh D. Roy, Manager – Innovation Team, Decision Support, Finance, Kaiser Permanente
  • Rajesh Iyer, Vice President and Head of the AI Center of Excellence, Capgemini

This lively panel discussion covered numerous explainable ML subjects, included perspectives from financial services and healthcare, and answered several thoughtful audience questions. You can check it out on our website under H2O World replays.[3] 

We can’t thank Agus, Marc, Taposh, Tom, and Rajesh enough for their comments, thoughts, and insights. It was great to hear how these respected professionals and leaders are thinking through problems and putting explainable ML to work! We hope you learn as much from their commentary as we did.

Answers to the Panel Questions

Although I was an active moderator in these discussions, I’ve had some more time to think about the panel questions and wanted to answer the main panel questions in this blog post format where it’s easier to share resources and links. The questions, my answers, and public resources are available below.

1.) What is Explainable Machine Learning?

Explainable ML, also known as explainable artificial intelligence (XAI), is a field of ML that attempts to make the inner workings and reasoning behind the predictions of complex predictive modeling systems more transparent. It’s been defined in several ways.

Finale Doshi-Velez and Been Kim gave one of the first and broadest definitions in the field for interpretability: “the ability to explain or to present in understandable terms to a human.”[4] So we can see that explanation is part of the broader notion of interpretability. Later, Gilpin et al. put forward that, in the context of ML, a good explanation is “when you can no longer keep asking why.”[5] The Defense Advanced Research Projects Agency (DARPA), i.e. the inventors of the internet, has also done considerable work in XAI and offers a few discussion points on its XAI homepage.[6]

Like most fields, explainable ML is not without criticism. For an idea of what some find objectionable, see Cynthia Rudin’s Please Stop Explaining Black Box Models for High Stakes Decisions.[7]

(This is a well-reasoned critique from one of the brightest minds in ML, but personally, I feel it’s a bit too purist. Post-hoc explanations fit nicely into entrenched business processes, and the science is moving quickly to address some of Professor Rudin’s concerns. Also, many techniques in explainable ML are meant to provide adverse action notices for regulated decisions as mandated by the Fair Credit Reporting Act (FCRA) or the Equal Credit Opportunity Act (ECOA). Such adverse action notices have to be supplied even if the underlying model is directly interpretable.)

2.) What are the business drivers for interpretability of models in your industry? Is it driven by regulation? Are customers asking for insight into automated decisions? Or is interpretability needed mainly for internal validation purposes?

The main drivers in the adoption of explainable ML appear to be:

  • Explaining impactful decision-making technologies to customers, users, and business stakeholders
  • Regulatory compliance [8]
  • Forensics and defenses for ML model cyber attacks
  • Augmenting disparate impact analysis and related techniques for auditing and remediating algorithmic discrimination

Many have asked how the E.U. GDPR will affect the practice of ML in the U.S. I recommend Andrew Burt’s How Will the GDPR Impact Machine Learning  for a brief primer there.[9]  I’ve also come to understand more about how explainable ML can be used for hacking, defending, and forensically analyzing ML models. If that subject is of interest to you, consider having a look at Proposals for Model Security and Vulnerability .[10] 

3.) Do you believe there is a trade-off between the accuracy of machine learning models and the interpretability of machine learning models? It’s something that is spoken and written about frequently, but have you ever seen this trade-off in practice?

Outside of deep learning (which today is mostly geared toward pattern recognition in images, sound, and text), I now agree with Professor Rudin when she says, “It is a myth that there is necessarily a trade-off between accuracy and interpretability.”[7] In my own experience, in constantly changing, low signal-to-noise problems, like those relating to many human behaviors, you may see incremental improvements in accuracy on static development and test data when using very complex ML models. However, when those same very complex models take a long time to deploy or have to make decisions on new data that differs from their development and test data, those incremental improvements can vanish. In many cases, I feel practitioners would be better off using more constrained models and retraining them more often.

If you’re interested in trying out constrained ML models, I recommend monotonic gradient boosting machines, now available in the highly scalable, reliable, and open-source H2O-3 and XGBoost packages.[11],[12] Also check out the awesome-machine-learning-interpretability metalist for more types of directly interpretable, constrained, or Bayesian ML models (and a lot of other explainable ML software and resources).[13]
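
To make this concrete, here is a minimal sketch of a monotonic GBM using XGBoost’s native API, following the pattern in the monotonic constraints tutorial cited above.[12] The two-feature dataset and the constraint directions are made up for illustration only.

```python
# Minimal sketch: a gradient boosting machine with monotonicity constraints.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(size=(n, 2))  # illustrative features
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "max_depth": 3,
    "eta": 0.1,
    # +1: predictions may only rise as feature 0 rises;
    # -1: predictions may only fall as feature 1 rises.
    "monotone_constraints": "(1,-1)",
}
booster = xgb.train(params, dtrain, num_boost_round=200)
```

Because every tree respects the stated directions, reasoning about how each constrained feature drives the prediction becomes much more straightforward.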

4.) Do you see a connection between transparency of machine learning processes and decreased financial or operational risk?

Personally, I see a win-win-win with explainable ML.

Win 1: It is possible (see NeuroDecision™) to use constrained ML models to attain higher accuracy than traditional regression approaches and to retain regulator-mandated transparency.[1] This higher accuracy should lead to better ROI for commercial predictive modeling endeavors.

Win 2: Better transparency should translate into lower operational and reputational risk for commercial ML projects. Models that practitioners actually understand should be less likely to make giant financial mistakes, get hacked, or be discriminatory.

Win 3: Combined with other best practices, explainable ML is just the right thing to do. ML can affect people negatively. When paired with disparate impact analysis and model management and monitoring, explainable ML can help ensure models are not accidentally or intentionally harming people.

Of course, there are some losses too. Like many other technologies, explainable ML can be used in helpful or harmful ways, and that’s very important to understand. Beyond their use in hacking or attacking model APIs, explainable ML techniques can also be used for “fair washing,” or making discriminatory models look non-discriminatory.[14]

5.) Are there any applications in your industry where interpretability is simply unneeded? Where do you see the greatest need?

In my opinion, in my own field, there is no place where interpretability is not needed. There’s simply too much risk, and interpretable and explainable ML has just become too easy to implement. (See the awesome-machine-learning-interpretability list for dozens of freely-available interpretable and explainable ML software packages.[13] )

ML models can be hacked. ML models can be discriminatory. ML models can be wrong. Moreover, there are bad actors out there looking to make these bad things happen. Explainable ML, along with other best practices, helps us ensure our models are behaving as expected and not harming people.

The greatest need arises anywhere ML is affecting people. The potential negative effects of ML on people are a broader subject than can be addressed in this post, and explainable ML is just one small part of the problem and solution. NP Slagle gives a great introduction to the wider concerns of conscientious data scientists in his essay, On the Responsibility of Technologists: A Prologue and Primer.[15]

6.) What types of interpretability practices are you using today? White-box models? Reason codes? What are you excited about in the future?

At H2O we use a lot of interpretable modeling, explainable ML, and model debugging techniques including (but not limited to):

  • Disparate impact analysis
  • Individual conditional expectation (ICE) [16]
  • Local interpretable model-agnostic explanations (LIME) [17]
  • Linear models
  • Monotonic gradient boosting machines (GBMs)
  • Partial dependence
  • Residual analysis
  • RuleFit [18]
  • Sensitivity analysis
  • Shapley feature importance [19]
  • Surrogate decision trees (see the sketch below)
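
As a small example of the last item, here is a minimal surrogate decision tree sketch using scikit-learn (an illustrative choice, not a description of our internal tooling): a shallow, directly interpretable tree is fit to the predictions of a more complex model so that its splits approximate the complex model’s learned behavior. The data and model choices are made up.

```python
# Minimal surrogate decision tree sketch.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)

complex_model = GradientBoostingRegressor().fit(X, y)

# The surrogate is trained on the complex model's predictions, not on y,
# so its splits summarize what the complex model has learned.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, complex_model.predict(X))
print(export_text(surrogate, feature_names=["x0", "x1", "x2", "x3"]))
```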

In the future, we are looking forward to combining even more techniques and tools that increase model transparency, trustworthiness, and security, and that decrease disparate impact, to provide our customers and community with a holistic, low-risk, and human-centered ML toolkit.

To learn more about the tools H2O Driverless AI uses today for explainability and interpretability, please check out our interpretable machine learning tutorial.[31]

Unanswered Audience Questions

There were so many helpful and insightful audience questions during these discussions that we were not able to answer even half of them during the events. Here are some of the questions we recorded and can answer now. If your question is not here, please ask again!

Suppose that after a model explains its decision, unintentional discrimination is found. What can we do to retrain the model to remove such discrimination?

The standard best practice for fair lending purposes is to train several models and select the most acceptable model with the least disparate impact. To retrain a model with less disparate impact, look into techniques like Learning Fair Representations .[20]  To lessen the disparate impact in model predictions, look into techniques like equalized odds post-processing.[21] 
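
As a rough illustration of the “train several models, pick the least disparate acceptable one” practice, here is a minimal sketch. The adverse_impact_ratio helper, the binary group encoding, and the 0.8 (four-fifths rule) cutoff are illustrative assumptions, not compliance guidance.

```python
# Minimal sketch: select among candidate models by disparate impact, then accuracy.
import numpy as np

def adverse_impact_ratio(preds, protected):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    favorable = preds == 1                      # 1 = favorable outcome (e.g., approval)
    rate_protected = favorable[protected == 1].mean()
    rate_reference = favorable[protected == 0].mean()
    return rate_protected / rate_reference

def select_model(candidates, X_valid, y_valid, protected, air_cutoff=0.8):
    best, best_acc = None, -np.inf
    for model in candidates:
        preds = model.predict(X_valid)
        air = adverse_impact_ratio(preds, protected)
        acc = (preds == y_valid).mean()
        # Keep only models above the disparate impact cutoff,
        # then prefer the most accurate among them.
        if air >= air_cutoff and acc > best_acc:
            best, best_acc = model, acc
    return best
```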

Besides ethics and compliance, will interpretability provide actionable insights?

Definitely yes. Consider the related topics of ML debugging and ML security. (There are many other potential application areas as well.) In model debugging, we can use explanations of the model mechanisms, predictions, and residuals to understand errors in our model’s predictions and correct them. In ML security we can use explanations as white-hat and forensic tools to defend against model hacking.

One could also argue interpretability, in general, provides actionable insights. If my ML model is directly interpretable, then its mechanisms might yield the same kind of actionable insights as linear model coefficients and trends.
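
As a small illustration of the model debugging point above, here is a sketch that slices residuals by segment to find where a model makes its largest errors. The column names and the scikit-learn-style predict() interface are assumptions for illustration.

```python
# Minimal model-debugging sketch: mean absolute residual per data segment.
import pandas as pd

def residuals_by_segment(df, model, features, target, segment_col):
    preds = model.predict(df[features])
    out = df[[segment_col]].copy()
    out["residual"] = df[target] - preds
    # Segments with large mean absolute residuals are candidates for further
    # explanation, more data, or targeted feature engineering.
    return (out.assign(abs_residual=out["residual"].abs())
               .groupby(segment_col)["abs_residual"]
               .mean()
               .sort_values(ascending=False))
```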

Can every machine learning model be updated to become explainable?

Model-agnostic explanation techniques can be applied to nearly all types of standard ML models. See Chapter 5 of Christoph Molnar’s Interpretable Machine Learning  for more information about model-agnostic explanation techniques.[22]  Additionally, decision trees, neural networks , and likely other types of models can be constrained to be monotonic, which greatly increases their interpretability.[23] ,[11] ,[12] 
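
As one concrete example of a model-agnostic technique covered in Molnar’s book, here is a hand-rolled sketch of permutation feature importance; it only needs a model exposing predict() and a metric where higher is better, both of which are assumptions of this sketch.

```python
# Minimal model-agnostic sketch: permutation feature importance.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffling one column breaks its relationship with y while
            # keeping its marginal distribution intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # bigger score drop => more important feature
    return importances
```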

Can I get more info about Wells Fargo’s explainable neural network?

Yes! Check out Explainable Neural Networks Based on Additive Index Models  and Enhancing Explainability of Neural Networks through Architecture Constraints .[24] ,[25] 

Can I get more details about Equifax’s NeuroDecision™ neural network?

Yes! There is a YouTube video and a patent, Optimizing Neural Networks for Risk Assessment .[26], [1] 

Different interpretation methods come with different assumptions and limitations. How do you evaluate the performance of an interpretability algorithm?

The highest standard is human judgment, as discussed in Towards a Rigorous Science of Interpretable Machine Learning.[4] Because these types of human studies can be prohibitive in terms of cost and time, our team at H2O has recommended some approaches for testing model interpretability with simulated data, comparison to pre-existing explanations, or data and model perturbation.[27]
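
Here is a minimal sketch of the simulated-data idea: generate data where the truly important features are known in advance, then check that the explanation technique ranks them at the top. The random forest and its built-in importances below stand in for whichever model and explanation method you actually want to test.

```python
# Minimal sketch: test an explanation technique against a known simulated answer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
signal = rng.normal(size=(n, 2))                 # features 0 and 1 drive y
noise = rng.normal(size=(n, 3))                  # features 2-4 are pure noise
X = np.hstack([signal, noise])
y = 3.0 * signal[:, 0] - 2.0 * signal[:, 1] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# Any global importance or explanation method can be checked the same way;
# here the model's own impurity importances are used for brevity.
ranking = np.argsort(model.feature_importances_)[::-1]
assert set(ranking[:2]) == {0, 1}, "explanation missed the known signal features"
```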

Does H2O XGBoost or GBM have monotonic options?

Yes! Both the H2O and XGBoost variants of GBM in the H2O-3 library support monotonicity constraints using the monotone_constraints hyperparameter.
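
A minimal sketch with the H2O-3 Python API is below; it assumes a local H2O cluster can be started, and the feature names, data, and constraint directions are made up for illustration.

```python
# Minimal sketch: monotonic H2O-3 GBM via the monotone_constraints hyperparameter.
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "debt_to_income": rng.uniform(size=1000),
    "savings_rate": rng.uniform(size=1000),
})
df["risk_score"] = 2.0 * df["debt_to_income"] - 1.5 * df["savings_rate"]
train = h2o.H2OFrame(df)

# +1: predictions must not decrease as debt_to_income increases;
# -1: predictions must not increase as savings_rate increases.
gbm = H2OGradientBoostingEstimator(
    ntrees=100,
    max_depth=3,
    monotone_constraints={"debt_to_income": 1, "savings_rate": -1},
)
gbm.train(x=["debt_to_income", "savings_rate"], y="risk_score", training_frame=train)
```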

Does monotonicity need to hold unconditionally?

No, not when we know it does not apply. For instance, I was once told that the frequency of commercial air travel as a function of age is nonmonotonic: it apparently peaks in a person’s mid-forties and is lower for younger and older people. If this is true, it would be an example of when not to use monotonicity constraints.

In terms of adoption of AI, what is the proper role of interpretability in establishing trust?

This is a philosophical question, but I can give my own perspective. Interpretability and trust are related, but certainly not the same thing. Interpretability alone does not enable trust. In fact, interpretability can decrease trust if the model mechanisms and predictions are found to contradict human domain expertise or reasonable expectations. Interpretability, along with other properties, such as security, fairness, and accuracy, all play into human trust of the ML models.

Is model interpretability and explanation only available in H2O Driverless AI?

In terms of H2O products, H2O Driverless AI probably makes interpretability easiest, as it can automatically train interpretable models, provides an interactive explanation dashboard, and produces scoring artifacts for generating explanations on new, unseen data.

However, open-source H2O-3 contains many explanation and interpretability features including linear models, monotonicity constraints for GBM, Shapley explanations for GBM, and partial dependence plots. I also keep a GitHub repo containing interpretable and explainable ML examples using Python, H2O-3, and XGBoost.[28]
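
For example, assuming a recent H2O-3 version and reusing the hypothetical gbm and train frame from the monotonic sketch earlier in this post, Shapley (TreeSHAP) contributions can be pulled directly from the trained model:

```python
# Minimal sketch: Shapley contributions for an H2O-3 GBM. Each output row holds
# one contribution per feature plus a bias term for the corresponding prediction.
contributions = gbm.predict_contributions(train)
print(contributions.head())
```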

We talk about model explainability so we can rely on the model. Do you care about data explainability so we know model inputs can be relied on?

Yes! The tenets “Garbage in, garbage out” and “Trust thy data” still hold.

However, I will add two caveats here (and I’m sure there are others).

  • We now understand input data can contain sociological biases that can create a disparate impact in model predictions. Several techniques have been proposed to minimize such information in development data.[29],[30]
  • Complex feature engineering can be a detriment to interpretability and explanation. For best interpretability and explanation results, use only interpretable and explainable input features.

When models have potentially hundreds of features how do you collapse them to the max allowed for adverse action reasons, particularly when using Shapley values?

Great question! While we can’t give compliance advice, I would suggest a procedure along the lines of the following (a code sketch follows the list):

  1. Use monotonicity constraints to make reasoning about the contribution of features more straightforward.
  2. Group features together by their actual meaning. For instance, all features regarding repayment status would form one group. To understand this group’s local feature importance for any one individual, use the mean Shapley value of all the features in the group for each individual.
  3. Rank all groups of features and any single features by their Shapley value.
  4. Take the necessary top-k positive groups of features or individual features to generate your adverse action codes.
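
Here is a minimal sketch of steps 2 through 4, assuming the Shapley values arrive as a pandas DataFrame with one column per feature; the group mapping and helper name are hypothetical.

```python
# Minimal sketch: collapse per-feature Shapley values into per-group values,
# then rank groups per individual for adverse action reasons.
import pandas as pd

def top_k_adverse_action_groups(shap_df, groups, k=4):
    """shap_df: one row per individual, one column per feature (Shapley values).
    groups: dict mapping a group name to its list of feature columns."""
    group_scores = pd.DataFrame({
        name: shap_df[cols].mean(axis=1) for name, cols in groups.items()
    })
    # For each individual, keep the k groups contributing most strongly
    # in the adverse (positive) direction.
    return group_scores.apply(
        lambda row: row[row > 0].nlargest(k).index.tolist(), axis=1
    )

# Hypothetical usage:
# groups = {"repayment_status": ["late_30d", "late_60d"], "utilization": ["util_ratio"]}
# reasons = top_k_adverse_action_groups(shap_values, groups, k=4)
```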

_________________________________________________________________________

[1] Introduction to NeuroDecision™: https://www.youtube.com/watch?v=HxDHis4btg0

[2]  Explaining Explainable AI: https://www.brighttalk.com/webcast/16463/346891 

[3]  Machine Learning Interpretability Panel – H2O World San Francisco: https://www.youtube.com/watch?v=di2j5Dy7W9k 

[4] Towards a Rigorous Science of Interpretable Machine Learning : https://arxiv.org/pdf/1702.08608.pdf 

[5]  Explaining Explanations: An Overview of Interpretability of Machine Learning : https://arxiv.org/pdf/1806.00069.pdf 

[6]  Explainable Artificial Intelligence: https://www.darpa.mil/program/explainable-artificial-intelligence 

[7]  Please Stop Explaining Black Box Models for High Stakes Decisions : https://arxiv.org/pdf/1811.10154.pdf 

[8] In the U.S., explanations and model documentation may be required under at least the Civil Rights Acts of 1964 and 1991, the Americans with Disabilities Act, the Genetic Information Nondiscrimination Act, the Health Insurance Portability and Accountability Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, the Fair Housing Act, Federal Reserve SR 11-7, and the European Union (EU) General Data Protection Regulation (GDPR) Article 22.

[9]  How Will the GDPR Impact Machine Learning : https://www.oreilly.com/ideas/how-will-the-gdpr-impact-machine-learning 

[10]  Proposals for Model Security and Vulnerability : https://www.oreilly.com/ideas/proposals-for-model-vulnerability-and-security 

[11]  Gradient Boosting Machine (GBM): http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/gbm.html 

[12]  Monotonic Constraints: https://xgboost.readthedocs.io/en/latest/tutorials/monotonic.html 

[13]  Awesome-machine-learning-interpretability: https://github.com/jphall663/awesome-machine-learning-interpretability 

[14]  Fair Washing: The Risk of Rationalization : https://arxiv.org/pdf/1901.09749.pdf 

[15]  On the Responsibility of Technologists: A Prologue and Primer : https://algo-stats.info/2018/04/15/on-the-responsibility-of-technologists-a-prologue-and-primer/ 

[16]  Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation : https://arxiv.org/pdf/1309.6392.pdf 

[17]  “Why Should I Trust You?” Explaining the Predictions of Any Classifier : https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf 

[18]  Predictive Learning via Rule Ensembles : http://statweb.stanford.edu/~jhf/ftp/RuleFit.pdf 

[19]  A Unified Approach to Interpreting Model Predictions : https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf 

[20]  Learning Fair Representations:  http://proceedings.mlr.press/v28/zemel13.pdf 

[21]  Equality of Opportunity in Supervised Learning : https://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf 

[22]  Model-Agnostic Methods : https://christophm.github.io/interpretable-ml-book/agnostic.html 

[23]  Monotonic Networks : https://papers.nips.cc/paper/1358-monotonic-networks.pdf 

[24]  Explainable Neural Networks Based on Additive Index Models : https://arxiv.org/pdf/1806.01933.pdf 

[25]  Enhancing Explainability of Neural Networks through Architecture Constraints : https://arxiv.org/pdf/1901.03838.pdf 

[26]  Optimizing Neural Networks for Risk Assessment : https://patents.google.com/patent/WO2016160539A1/en 

[27]  Testing Model Explanation Techniques : https://www.oreilly.com/ideas/testing-machine-learning-interpretability-techniques 

[28]  Interpretable Machine Learning with Python: https://github.com/jphall663/interpretable_machine_learning_with_python 

[29]  For instance, Optimized Pre-Processing for Discrimination Prevention : http://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention.pdf 

[30]  For instance, Certifying and Removing Disparate Impact : https://arxiv.org/pdf/1412.3756.pdf 

[31]  Machine Learning Interpretability Tutorial : https://h2oai.github.io/tutorials/machine-learning-interpretability-tutorial/ 


Navdeep Gill

Navdeep is an Engineering Manager at H2O.ai where he leads a team of researchers and engineers working on various facets of Responsible AI. He also leads science and product efforts around explainable AI, ethical AI, model governance, model debugging, interpretable machine learning, and the security of machine learning. Navdeep previously focused on GPU-accelerated machine learning, automated machine learning, and the core H2O-3 platform at H2O.ai.