
5 Key Considerations for Machine Learning in Fair Lending


By Benjamin Cox | September 21, 2020


This month, we hosted a virtual panel with industry leaders and explainable AI experts from Discover, BLDS, and H2O.ai to discuss how machine learning can expand access to credit fairly and transparently, and the challenges of governance and regulatory compliance. The event was moderated by Sri Ambati, Founder and CEO at H2O.ai.

Check out the 5 key takeaways from the virtual panel.  

1. Disparate Impact vs. Disparate Treatment

According to Nick Schmidt, Director and AI Practice Leader at BLDS LLC, “It is really important to get these foundational issues correct early on. There are essentially two different types of discrimination that are recognized in the law, in credit and housing and employment. One is disparate treatment, and that’s what we probably all think of when we think about discrimination: a person being treated differently because of a protected characteristic.”

He continues, “The other is disparate impact, and this is more controversial. It refers to practices that adversely affect one group of people with a protected characteristic more than another, even though the rules being applied are formally neutral. The idea is that you can have a factor that is predictively valid and reasonable, but the usage of it ultimately leads to less favorable outcomes for a protected class. And by a protected class, I mean, say, African Americans, Hispanics, Asians, older people, women. So if you look in lending, for instance, almost any lending model is going to give fewer offers to African Americans and Hispanics relative to whites and Asians. It’s a fact, and there are many reasons for that. That is an example of disparate impact, and there is what’s called the burden-shifting framework, which is used in litigation.”
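To make the measurement side of this concrete, here is a minimal sketch (not from the panel) of one common first check for disparate impact: the adverse impact ratio, i.e. the approval rate of the protected group divided by that of the control group. The data and group labels below are purely illustrative, and the 0.8 threshold mentioned in the comment is a rule of thumb borrowed from the EEOC's "four-fifths" guideline in employment, not a hard legal standard.

```python
import pandas as pd

# Hypothetical scored applications: one row per applicant, with a model-driven
# approval decision and an illustrative demographic group label.
df = pd.DataFrame({
    "group":    ["control", "control", "control", "protected", "protected", "protected"],
    "approved": [1, 1, 0, 1, 0, 0],
})

approval_rates = df.groupby("group")["approved"].mean()

# Adverse impact ratio: approval rate of the protected group relative to the
# control group. Values well below 1 (0.8 is a commonly cited rule of thumb,
# following the EEOC "four-fifths" guideline) suggest potential disparate impact.
air = approval_rates["protected"] / approval_rates["control"]
print(f"Adverse impact ratio: {air:.2f}")
```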

2. Adverse Action Notices

Raghu Kulkarni, Vice President of Data Science at Discover, explains, “Adverse action is at the core of most of our financial institution operations when it comes to credit lending. There is a chance that applicants might not receive the credit they have requested. Adverse action is the legal way in which we tell customers: here are the reasons why you did not receive the credit you requested.”

He continues, “From a modeling perspective, or from a domain perspective, it boils down to rank-ordering the top attributes within the model, or a combination of model and strategy, which caused the decision to go against the applicant.”
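To illustrate that rank-ordering step, here is a rough sketch (not Discover's actual implementation): it assumes we already have per-feature contributions for a declined applicant, for example the Shapley values discussed in the next section, and simply sorts them to surface the attributes that pushed the decision against the applicant. The feature names and numbers are made up.

```python
# Hypothetical per-feature contributions to a declined applicant's score,
# e.g. Shapley values on the probability-of-default scale. In this illustrative
# sign convention, positive values push the decision against the applicant.
contributions = {
    "utilization_rate": 0.12,
    "recent_delinquencies": 0.08,
    "income": -0.03,
    "length_of_credit_history": 0.05,
}

# Rank-order the attributes that most hurt the applicant; the top few would
# feed into the adverse action reason codes.
top_reasons = sorted(
    (item for item in contributions.items() if item[1] > 0),
    key=lambda item: item[1],
    reverse=True,
)[:2]

for feature, contribution in top_reasons:
    print(f"Adverse action reason: {feature} (contribution {contribution:+.2f})")
```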

3. The Use of Shapley Values

Patrick Hall, Advisor at H2O.ai and Principal Scientist at bnh.ai, affirms that Shapley values are his tool of choice for explaining machine learning models: they are an effective way to break down an individual machine learning prediction into its components, and there are also many good ways to aggregate those individual prediction explanations into global explanations.

“Shapley values are probably the leading technique among many techniques for summarizing individual machine learning model predictions. Shapley values are not perfect, but I’d still argue they’re the best, especially if you’re using tree-based models in some kind of high-stakes use case,” argues Patrick.
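As a minimal sketch of that local-to-global workflow, the example below uses the open-source shap package with a scikit-learn gradient boosting model on synthetic data. The dataset and feature names are placeholders, not a real credit portfolio, and a production lending model would add its own preprocessing and validation.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a credit dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Local explanations: one vector of Shapley values per individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: aggregate the local explanations, e.g. the mean absolute
# Shapley value per feature across all rows.
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False))
```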

4. Current State of Interpretable Machine Learning  

Regarding this topic, I believe the data science industry is working to build things like explainable boosting machines and explainable neural networks to make typically black-box models more white-box and more transparent. You can see what’s going on under the hood, which is a really positive trend in the industry. We [H2O.ai] try to build on top of that as much as possible.
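One of the glass-box approaches mentioned above, the explainable boosting machine, is available in the open-source interpret package. The sketch below, again on synthetic data, shows the basic fit-and-explain workflow; it is illustrative rather than a recommended production setup.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for a lending dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# An explainable boosting machine is an additive model, so each feature's
# learned shape function can be inspected directly.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global view: per-feature shape functions and importances.
global_explanation = ebm.explain_global()

# Local view: the additive contributions behind individual predictions.
local_explanation = ebm.explain_local(X[:5], y[:5])
```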

Patrick Hall adds, “We are really coming to the point where the idea of using a traditional black-box machine learning model on a structured dataset is becoming questionable, not only from an ethical standpoint but also from a security and accuracy standpoint, so this [machine learning interpretability] is really important.”

Nick adds, “And from the fairness standpoint as well. (…) When I talk to regulators, I really stress this, because there are so many possible features, so many different algorithms, so many hyperparameters that you can decide on. You end up with this multiplicity of good models where you don’t really know which one to choose. There are a bunch of them; there are a hundred, a thousand models that look almost identical. So what do you do? You can then optimize on explainability. You can optimize on fairness and get a better model. And so that’s great. That’s a really huge advantage.”

5. Bias in Machine Learning 

Patrick argues, “The safest way to deal with discrimination in a predictive model is to consider discrimination measurements when you’re doing model selection. So yes, there are fancy new machine learning research approaches that we’d love to talk about. But the simplest thing you can do is just think: when I tweak this hyperparameter, does it change my fairness metric? When I add this variable in, does it change my fairness metric? And then pick a model that tries to minimize disparate impact or discrimination while also meeting your business objectives. I think that’s the most practical and safest advice that we can give.”
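A schematic sketch of that selection loop is below. The fairness metric here is the adverse impact ratio from the first section, the data and hyperparameter grid are synthetic placeholders, and the 0.70 AUC floor simply stands in for whatever the real business objective would be.

```python
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; `protected` is an illustrative group indicator.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
protected = np.random.default_rng(0).random(len(y)) < 0.3

def adverse_impact_ratio(approved, protected_mask):
    """Approval rate of the protected group relative to everyone else."""
    return approved[protected_mask].mean() / approved[~protected_mask].mean()

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(X, y, protected, random_state=0)

# Many hyperparameter settings produce near-identical accuracy; score each
# candidate on both a business metric (AUC) and a fairness metric (AIR).
candidates = []
for max_depth, lr in product([2, 3, 4], [0.05, 0.1]):
    model = GradientBoostingClassifier(
        max_depth=max_depth, learning_rate=lr, random_state=0
    ).fit(X_tr, y_tr)
    approved = model.predict(X_te)
    candidates.append({
        "params": (max_depth, lr),
        "auc": roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]),
        "air": adverse_impact_ratio(approved, p_te),
    })

# Among models meeting the business objective, prefer the one closest to parity.
viable = [c for c in candidates if c["auc"] >= 0.70] or candidates
best = min(viable, key=lambda c: abs(1 - c["air"]))
print(best)
```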

Raghu adds, “I think bias has always been there. It’s not like machine learning invented bias. Algorithms might exacerbate some of the correlations, or may actually reduce them. So having an analytical framework to actually look at the dataset and then come to a more practical solution would be the way to go.”

Steven Dickerson, SVP and Chief Analytics Officer at Discover, then adds, “If you don’t have the proper framework and the right process in place, chances are the robustness and long-term performance of your model are going to suffer as well. All of these things that we’re talking about can also be applied to making sure that you have a long-running, robust model. So it’s not just about fairness and bias.”

Learn more about discrimination and interpretability in the context of fair lending and machine learning. Click here.

And in case you want to watch the panel recording, click here.    


Benjamin Cox

Ben Cox is a Director of Product Marketing at H2O.ai, where he helps lead Responsible AI market research and thought leadership. Prior to H2O.ai, Ben held data science roles in high-profile teams at Ernst & Young, Nike, and NTT Data. Ben holds an MBA from the University of Chicago Booth School of Business with multiple analytics concentrations and a BS in Economics from the College of Charleston.