Welcome to Part 2 of our blog series, What is Your AI Thinking? In this installment we explore some of the most promising testing methods for enhancing trust in AI and machine learning models and systems, and we cover model documentation as a best practice from both a business and a regulatory standpoint.
Understanding and trust are intrinsically linked, and ideally we want to both trust and understand any deployed AI system. While explanatory techniques are mostly about increasing understanding of AI and machine learning models and systems, model debugging is about enhancing trust in those same systems by testing them in real-life and simulated scenarios. Sensitivity analysis, also known as scenario analysis or “what-if” analysis, is probably the best-known method for testing the behavior of machine learning models.
Sensitivity Analysis – How will your model behave in the next market boom or bust? What if it encounters data it never saw during training? Is the AI system you’ve created easy to hack or game? Sensitivity analysis helps answer all of these questions the old-fashioned way: by testing the scenarios explicitly. You generate data that simulates a scenario of interest, say a recession, unseen data, or a hacking attempt, and then analyze how the model behaves on that data. If your model is not passing these tests in a way you are comfortable with, send yourself or your team back to the drawing board!
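To make this concrete, here is a minimal sketch of a scenario test in Python. The credit-risk model, feature names, and shift sizes below are all hypothetical; the point is simply to simulate a stress scenario and compare predictions before and after.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical credit-risk model trained on two made-up features.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "debt_to_income": rng.uniform(0.05, 0.60, 1_000),
})
y = (X["debt_to_income"] + rng.normal(0, 0.1, 1_000) > 0.4).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Simulate a recession scenario: incomes fall 15%, debt loads rise 20%.
scenario = X.copy()
scenario["income"] *= 0.85
scenario["debt_to_income"] *= 1.20

baseline = model.predict_proba(X)[:, 1].mean()
stressed = model.predict_proba(scenario)[:, 1].mean()
print(f"Mean predicted default risk: {baseline:.3f} (baseline) "
      f"vs. {stressed:.3f} (recession scenario)")
```

A big shift in predictions under stress is not automatically wrong, but it is exactly the kind of behavior you want to see, question, and sign off on before deployment rather than after.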
Sociological fairness in machine learning is an incredibly important but highly complex subject. In a real-world machine learning project, the hard-to-define phenomenon of unfairness can materialize in many ways and from many different sources. However, there is a practical way to discuss and handle observational fairness, or how your model’s predictions affect different groups of people. This is known as disparate impact analysis.
Disparate Impact Analysis – Disparate impact analysis is a fairly straightforward method that compares your model’s predictions across sensitive demographic segments, such as ethnicity, gender, or disability status, or across any other groups of observations you care about. It is also an accepted, regulation-compliant tool for fair-lending purposes in the U.S. financial services industry. If it’s good enough for multibillion-dollar credit portfolios, it’s probably good enough for your project! Besides, why risk being called out in the media for training an unfair model? And why not do the right thing and investigate how your model treats people?
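As a rough illustration, and not a compliance recipe, the core calculation is simple: compare favorable-outcome rates across segments and look at the ratios. The column names and data below are made up, and the 0.8 cutoff is just the common “four-fifths” rule of thumb.

```python
import pandas as pd

# Hypothetical scored data: one row per applicant, with the model's decision
# (1 = approved) and a demographic segment column.
scored = pd.DataFrame({
    "segment":  ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   1,   0,   0,   1,   0],
})

# Approval rate per segment, then each rate divided by the highest
# (reference) segment's rate: the disparate impact ratio.
rates = scored.groupby("segment")["approved"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# The "four-fifths" rule of thumb flags ratios below 0.8 for further review.
print(impact_ratios[impact_ratios < 0.8])
```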
Beyond a strong global and local understanding of your model and data, trust in its future behavior, and assurances of fairness, model interpretability is also about minimizing financial risk. Large financial services companies have been calculating and documenting information similar to that described above with this goal in mind for years.
Model Documentation – Model documentation is required in some industries but represents a best practice for all. Documentation should capture the essential information about a machine learning model: what data it was trained on, how it was built and validated, how it performs, how it treats different groups of people, and who is responsible for it.
All of this information can be handed to a data scientist, internal validators, or external regulators so they can understand precisely how the model was generated and what to do if it ever causes problems.
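As a loose sketch, and certainly not a regulatory template, this kind of information can be captured in a simple, versioned record that travels with the model artifact. Every field and value below is a placeholder.

```python
import json

# Hypothetical documentation record for a trained model; real fields would be
# dictated by your validation team or regulator.
model_doc = {
    "model_name": "credit_risk_gbm",           # placeholder name
    "version": "1.3.0",
    "training_data": "loans_2015_2018.csv",    # plus a snapshot hash or location
    "algorithm_and_settings": "gradient boosting, 500 trees, max depth 5",
    "validation_performance": {"auc": 0.81},
    "fairness_review": "disparate impact ratios >= 0.8 for all tested segments",
    "owners": ["data-science-team@example.com"],
    "remediation_plan": "retrain quarterly; escalate to model risk team on drift",
}

with open("model_documentation.json", "w") as f:
    json.dump(model_doc, f, indent=2)
```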
Interpretable models, explanations, model debugging, fairness techniques, and model documentation are being pursued by researchers and software vendors…today! In Part 3 of this blog series, learn how to use H2O Driverless AI to get a jump on your competition by automatically building low-risk, high-accuracy, and high-interpretability machine learning models.
This blog is the second in a three-part series. You can catch the first part here and the third part here.