
Bias in AI

August 25, 2021
by Scott Reed · 4 min read

In a recent blog, we talked about how, at DataRobot, we organize trust in an AI system into three main categories: trust in the performance of your AI/machine learning model, trust in the operations of your AI system, and trust in the ethics of your modeling workflow, both to design the AI system and to integrate it with your business process. Each of these categories contains a set of dimensions of trust. The purpose of this blog post is to discuss one dimension of trust in the category of ethics: bias and fairness.

This can be a challenging topic to navigate. For one thing, there’s mathematical complexity around identifying the presence of bias in data. There are also the social implications of determining what it means to be “fair.” Still, for AI to mature into a trusted tool that can advance equality and fairness in our decision-making processes, we must detect, analyze, and mitigate algorithmic bias in models.

What Does It Mean for an AI Model to Be “Biased”?

While fairness is a socially defined concept, algorithmic bias is mathematically defined.

A family of bias and fairness metrics in modeling describes the ways in which a model can perform differently for distinct groups in your data. Those groups, when they designate subsets of people, might be identified by protected or sensitive characteristics, such as race, gender, age, and veteran status.

Where Does AI Bias Come From?

The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Bias might also be a product not of the historical process itself but of data collection or sampling methods misrepresenting the ground truth. Ultimately, machine learning learns from data, but that data comes from us—our decisions and systems.

How Do I Measure AI Bias?

There are two general ways to think about disparities in performance for groups in your data.

  • Measuring fairness of error: this ensures the model performs with the same accuracy across groups so that no one group is subject to significantly more error in predictions than another. 

The predictive parity bias metric looks at the raw accuracy score between groups: the percentage of the time the outcome is predicted correctly. Both metrics are computed in the short sketch after this list.

  • Measuring fairness of representation: this means that the model is equitable in its assignment of favorable outcomes to each group.

The proportional parity bias metric satisfies fairness of representation. For example, you can use it to ensure that automated resume reviews accept both female and male candidates 25% of the time.
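To make these two metrics concrete, here is a minimal sketch in Python, assuming a pandas DataFrame with illustrative column names: "gender" as the protected attribute, "actual" as the true outcome, and "predicted" as the model output, with 1 marking the favorable outcome. The names are assumptions for illustration, not part of any particular tool.

```python
# Minimal sketch: per-group accuracy (fairness of error) and per-group
# favorable-outcome rate (fairness of representation). Column names are
# illustrative assumptions.
import pandas as pd

def per_group_metrics(df, group_col="gender", actual_col="actual", pred_col="predicted"):
    rows = []
    for group, sub in df.groupby(group_col):
        accuracy = (sub[actual_col] == sub[pred_col]).mean()   # predictive parity compares this
        favorable_rate = (sub[pred_col] == 1).mean()           # proportional parity compares this
        rows.append({group_col: group, "accuracy": accuracy, "favorable_rate": favorable_rate})
    return pd.DataFrame(rows)

# Toy data for the resume-review example
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "actual":    [1, 0, 1, 0, 1, 0, 0, 1],
    "predicted": [1, 0, 0, 0, 1, 1, 0, 1],
})
print(per_group_metrics(df))
```

Comparing the accuracy column across groups speaks to fairness of error, while comparing the favorable_rate column speaks to fairness of representation.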

It’s possible to introduce conditions to modify the fairness metric. For example, conditional proportional parity might dictate that you’d like to ensure that male and female candidates who have the same number of years of experience clear automated resume review at the same rate.
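Continuing the sketch above, conditioning simply means comparing favorable rates within each stratum of the conditioning feature rather than overall. The years_experience values below are hypothetical.

```python
# Conditional proportional parity sketch: favorable-outcome rate per group,
# computed within each (hypothetical) years-of-experience band.
rates = (
    df.assign(years_experience=[2, 2, 5, 5, 2, 2, 5, 5])  # assumed conditioning feature
      .groupby(["years_experience", "gender"])["predicted"]
      .mean()                                              # share of favorable (1) predictions
      .unstack("gender")
)
print(rates)  # compare groups within each row (experience band)
```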

Yet another metric, false positive and negative rate parity, measures the false positive and false negative rates of the model across each class.
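A sketch of this metric, reusing the illustrative DataFrame above: the false positive rate is computed among rows whose actual outcome is unfavorable, and the false negative rate among rows whose actual outcome is favorable.

```python
# False positive / false negative rate parity sketch, per group.
def error_rate_parity(df, group_col="gender", actual_col="actual", pred_col="predicted"):
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[actual_col] == 0]   # truly unfavorable outcomes
        positives = sub[sub[actual_col] == 1]   # truly favorable outcomes
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
        rows.append({group_col: group, "false_positive_rate": fpr, "false_negative_rate": fnr})
    return pd.DataFrame(rows)

print(error_rate_parity(df))
```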

Now, many of these metrics are mutually incompatible. Generally, for example, you cannot satisfy fairness by representation and fairness by error at the same time. So which metric to prioritize as you turn to mitigation has to be an active decision. Multi-stakeholder analysis to identify downstream impacts, along with the input of legal or compliance teams, is pivotal to the process.

How Do I Mitigate AI Bias?

Sometimes it is possible to turn to data curation and feature engineering to mitigate bias. Bias can be a product of a dataset with skewed representation of certain groups; if it’s possible to revisit data collection and start with a more balanced sample, that can reduce the impact of bias. If bias is tied to proxy features in your data, you can identify those proxy features by building a model to predict the protected class of interest. Features with strong predictive power are likely proxies, and removing or re-engineering them can also mitigate bias.
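As a rough illustration of that proxy check, the sketch below assumes a numeric feature matrix X (a pandas DataFrame of candidate model features) and a Series protected holding the protected class. The choice of a random forest and its impurity-based feature importances is an assumption for illustration, not a prescribed method.

```python
# Proxy-feature detection sketch: train a model to predict the protected
# attribute from the candidate features, then flag the most predictive
# features as likely proxies worth reviewing.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def find_proxy_candidates(X: pd.DataFrame, protected: pd.Series, top_n: int = 5) -> pd.Series:
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, protected)
    importances = pd.Series(model.feature_importances_, index=X.columns)
    return importances.sort_values(ascending=False).head(top_n)

# Usage (hypothetical): find_proxy_candidates(X_train, protected_attribute)
```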

However, in some cases, more advanced technical mitigation techniques will be needed. You can address bias in AI systems at three phases of the modeling workflow: preprocessing, in-processing, and post-processing. Preprocessing refers to mitigation methods applied to the training dataset before a model is trained on it. Altering weights on rows of the data to achieve greater parity in assigned outcomes is one example. In-processing refers to mitigation techniques incorporated into the model training process itself. Post-processing methods work on the predictions of the model to achieve the desired fairness. 
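As one concrete example of the reweighting idea in preprocessing, the sketch below follows the commonly cited reweighing scheme of Kamiran and Calders: each row is weighted by the ratio of the expected frequency of its (group, outcome) pair under independence to its observed frequency, so that group membership and the favorable outcome look independent after weighting. The column names, and the choice of this particular scheme, are assumptions for illustration rather than a description of any specific product feature.

```python
# Preprocessing sketch: reweighing-style sample weights, assuming a training
# DataFrame with a protected "group" column and a binary "outcome" column
# (1 = favorable). Names are illustrative.
import pandas as pd

def reweighing_weights(train: pd.DataFrame, group_col: str = "group", outcome_col: str = "outcome") -> pd.Series:
    p_group = train[group_col].value_counts(normalize=True)
    p_outcome = train[outcome_col].value_counts(normalize=True)
    p_joint = train.groupby([group_col, outcome_col]).size() / len(train)

    def weight(row):
        # expected frequency under independence / observed frequency
        return (p_group[row[group_col]] * p_outcome[row[outcome_col]]) / p_joint[(row[group_col], row[outcome_col])]

    return train.apply(weight, axis=1)

# weights = reweighing_weights(train)
# model.fit(X_train, y_train, sample_weight=weights)  # for estimators that accept sample_weight
```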

Conclusion

Bias and fairness in AI modeling need not be a mysterious monster. Understanding what it means for an AI model to be biased is the first step. With that knowledge, the next step is understanding where that bias came from. The largest source of bias in an AI system is the data it was trained on. Understanding how to measure the bias ultimately enables opportunities to mitigate it and to develop more trust in AI models.

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, his focus is to help enable customers on trust features and sensitive use cases, contribute to product enhancements in the platform, and provide thought leadership on AI Ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He has an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
