

Bias and Debiasing


By Kim Montgomery | April 15, 2022

Category: Explainable AI, H2O-3

An important aspect of practicing machine learning in a responsible manner is understanding how models perform differently for different groups of people, for instance people of different races, ages, or genders. Protected groups frequently have fewer instances in a training set, contributing to larger error rates for those groups. Some models may also produce very different average predictions for different groups. It’s important to be able to measure how model performance differs across groups and to understand the drivers of those differences.

Bias can enter a model at different stages of the modeling process.

  • It could be present in the data due to underlying disparities between groups or due to the data gathering process. For instance, differences in income may reflect very real historical income disparities.
  • It could be introduced at the labeling stage, if the labeling process is biased against certain groups. For instance, if historical decisions about startup funding were biased, that decision process could be learned by any model trained on the funding data.
  • It can also be introduced at the modeling stage, since different models learn information in different ways. For instance, some models may be better than others at learning from examples that occur less frequently for the minority group.

The first step in treating bias is determining how to measure the bias in the model. There are different ways to measure bias. Group fairness statistics keep track of differences in how the model treats groups of people on average. Individual fairness measures keep track of whether similar people are treated similarly. For most realistic applications, it will be desirable to consider both group and individual fairness. There are numerous measures of group fairness and no one metric can encompass all aspects of fairness, so it’s best to consider more than one metric in order to understand how groups of people are affected differently by the model. Some common group fairness statistics include Adverse Impact Ratio, False Positive Rates, False Negative Rates, differences in accuracy between groups, and differences in model calibration between groups. Even with group fairness metrics in the acceptable range, it could be that individuals are treated in a way that is less than fair, so it’s important to consider how individuals are rated differently by the model.
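
For a binary decision, several of these group statistics can be read directly off each group’s confusion matrix. Below is a minimal sketch, assuming a pandas DataFrame with hypothetical columns `group` (the protected attribute), `label` (the true outcome, 1 = favorable), and `pred` (the model’s decision); the column names and reference group are illustrative and not tied to any particular library.

```python
import pandas as pd

def group_fairness_report(df, reference_group):
    """Selection rate, FPR, FNR, and accuracy per group, plus the Adverse Impact Ratio."""
    rows = {}
    for g, sub in df.groupby("group"):
        tp = ((sub.pred == 1) & (sub.label == 1)).sum()
        fp = ((sub.pred == 1) & (sub.label == 0)).sum()
        tn = ((sub.pred == 0) & (sub.label == 0)).sum()
        fn = ((sub.pred == 0) & (sub.label == 1)).sum()
        rows[g] = {
            "selection_rate": (tp + fp) / len(sub),  # share receiving the favorable decision
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else float("nan"),
            "accuracy": (tp + tn) / len(sub),
        }
    report = pd.DataFrame(rows).T
    # Adverse Impact Ratio: each group's selection rate relative to the reference group's.
    report["adverse_impact_ratio"] = (
        report["selection_rate"] / report.loc[reference_group, "selection_rate"]
    )
    return report
```

Comparing a report like this before and after any intervention makes it easier to see which metrics actually moved.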

If a model is found to have bias, there are many things to consider. Rather than jumping to debiasing methods as a catch-all solution, the best first step is to understand the underlying cause of the bias. If it is caused by the data collection process, gathering new data is likely the best option, though possibly a costly one. Similarly, if the bias is due to labeling, relabeling the data is likely the best approach. If the data itself doesn’t directly favor the unprotected group, the bias may be entering at the modeling stage, and considering a range of models may help correct the problem.

After understanding the source of the bias and looking into improvements that directly correct for it, it may be worth considering debiasing methods to help find a fairer model. There is no single standard method for debiasing data, but debiasing methods generally fall into three categories based on the stage of the modeling process in which they intervene.

Preprocessing debiasing methods alter the data in order to reduce bias. The simplest is fairness through unawareness: simply leaving the protected group labels out of the training data. Because other features almost always contain information about the protected groups, fairness through unawareness alone is unlikely to reduce bias significantly. A step further is to remove other features that carry information about the protected group, either with methods like the infogram [1] or by removing the features most predictive of the protected group. Reweighing and sampling methods adjust the data so that the minority group has better representation. Other preprocessing methods look for a transformation of the features that contains less information about the protected groups while preserving accuracy. Methods such as feature selection and reweighing have the advantage that it’s straightforward to understand how the original data is being changed.
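
As one concrete example of a preprocessing method whose effect is easy to interpret, here is a minimal sketch of the reweighing idea, assuming a pandas DataFrame with illustrative `group` and `label` columns: each row gets the weight P(group) × P(label) / P(group, label), so that under the new weights the protected attribute and the label look statistically independent.

```python
import pandas as pd

def reweighing_weights(df, group_col="group", label_col="label"):
    """Per-row sample weights that decouple the protected attribute from the label."""
    n = len(df)
    # Marginal probability of each group and of each label, mapped back to rows.
    p_group = df[group_col].map(df[group_col].value_counts(normalize=True))
    p_label = df[label_col].map(df[label_col].value_counts(normalize=True))
    # Observed joint probability of each row's (group, label) combination.
    p_joint = df.groupby([group_col, label_col])[label_col].transform("size") / n
    # Weight = P(group) * P(label) / P(group, label).
    return (p_group * p_label / p_joint).rename("sample_weight")
```

The resulting weights can be passed to most training APIs as sample weights, and because the scheme only reweights rows, it is straightforward to inspect exactly how the training data was changed.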

In-processing methods attempt to build fairness into the modeling process using an objective function that includes both accuracy and fairness terms. For instance, in FairXGBoost, extra terms are added to the gradient, Hessian, and objective function to reduce bias by discouraging the model’s predictions from depending on the protected group [2].
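
The exact regularized objective is given in the paper; purely as a rough, hedged illustration of the general pattern (not the paper’s precise formulation), the sketch below plugs a fairness-penalized logistic loss into XGBoost’s custom-objective hook. The protected-group indicator `s`, the strength `mu`, and the training parameters are assumptions for illustration.

```python
import numpy as np
import xgboost as xgb

def make_fair_logistic_objective(s, mu):
    """s: 0/1 protected-group indicator aligned with the training rows; mu: fairness strength."""
    def objective(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))   # sigmoid of the raw margin
        # Standard logistic gradient, minus a term that discourages the score
        # from tracking the protected-group indicator.
        grad = (p - y) - mu * (p - s)
        hess = (1.0 - mu) * p * (1.0 - p)  # curvature of the combined loss
        return grad, hess
    return objective

# Usage sketch, where dtrain is an xgb.DMatrix built from the training data:
# booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
#                     num_boost_round=200,
#                     obj=make_fair_logistic_objective(s, mu=0.3))
```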

Postprocessing methods alter the output of a model to decrease bias. They have the advantage of being model-agnostic: because they act only on the model’s output, they can be applied to any model.
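
One common postprocessing idea, shown below as a hedged sketch rather than any particular library’s method, is to pick a separate decision threshold per group, for example so that selection rates line up; the function names and inputs are illustrative.

```python
import numpy as np

def per_group_thresholds(scores, groups, target_rate):
    """For each group, pick the score cutoff whose selection rate is closest to target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        k = int(round(target_rate * len(s)))       # number of members to select
        thresholds[g] = s[-k] if k > 0 else np.inf
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    """Return the 0/1 decisions using each row's group-specific cutoff."""
    return np.array([score >= thresholds[g] for score, g in zip(scores, groups)])
```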

Debiasing can be a useful tool, but it’s important to have a complete understanding of the effects of the debiaser. Below are some things to consider when evaluating a debiasing technique.

  • What effect does it have on group fairness statistics? It’s best to look at more than one fairness metric and think about which metrics are important for the problem.
    • Consider the change in group fairness statistics for the model with and without debiasing.
    • Check the accuracy of the model before and after debiasing. If there is a trade-off between accuracy and fairness, is it reasonable?
    • Consider the changes in average prediction with and without debiasing for both the protected and unprotected groups.
    • Consider how the feature importance changes with and without debiasing. The shift in feature importance may lead to insights into which features were contributing to bias.
  • What effect will the method have on individuals? (The sketch after this list illustrates some of these checks.)
    • Look at some of the individuals who had decision changes before and after debiasing. Are they sensible?
    • Look at the magnitude of the prediction changes due to debiasing. Do some people experience large changes and is that acceptable?
    • Consider the number of people from the protected and unprotected group that had an outcome change due to debiasing.
    • Look at the average feature values for people who had decision changes from positive to negative or negative to positive. On average, who is being affected by the debiasing and are the groups affected sensible?
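
Here is a minimal sketch of the individual-level checks above, assuming a pandas DataFrame with illustrative columns `group`, `pred_before` (the original decision), `pred_after` (the debiased decision), and the model’s feature columns.

```python
import pandas as pd

def decision_flip_summary(df, feature_cols):
    """Count decision flips per group and profile the people whose decision changed."""
    flipped_up = df[(df.pred_before == 0) & (df.pred_after == 1)]
    flipped_down = df[(df.pred_before == 1) & (df.pred_after == 0)]
    # How many people in each group had their outcome changed by debiasing?
    counts = pd.DataFrame({
        "flipped_to_positive": flipped_up.groupby("group").size(),
        "flipped_to_negative": flipped_down.groupby("group").size(),
    }).fillna(0).astype(int)
    # Average feature values of the people whose decision changed, to see who
    # the debiasing is actually affecting.
    profiles = pd.concat({
        "flipped_to_positive": flipped_up[feature_cols].mean(),
        "flipped_to_negative": flipped_down[feature_cols].mean(),
    }, axis=1)
    return counts, profiles
```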

How to Get Started

Want to learn more about Infogram? Check out our H2O-3 implementation and examples.

References

  1. Subhadeep Mukhopadhyay. InfoGram and Admissible Machine Learning, August 2021. https://arxiv.org/abs/2108.07380
  2. FairXGBoost: Fairness-aware Classification in XGBoost, October 2020. https://arxiv.org/abs/2009.01442

Kim Montgomery

Kim has a Ph.D. in applied mathematics, with a background in both predictive modeling and differential equations. She has significant experience applying mathematical modeling to problems in the energy industry and in the biosciences. She is a Kaggle grandmaster and has been ranked as high as 15th in the overall Kaggle rankings.  She’s excited to be applying her skills at H2O.ai.