Introducing DataRobot Bias and Fairness Testing

December 15, 2020 · by Jett Oristaglio · 4 min read

Bias and Fairness Failures in AI

The AI explosion has reached a fever pitch across dozens of industries. AI is being deployed ever more widely in the real world, with widespread impact on customers, consumers, and the general public. We’re also beginning to see more stories of unacceptable AI bias surfacing: hiring models that discriminate against women with the same skills as men, facial recognition that performs poorly on people of color, and the now-infamous COMPAS recidivism model that falsely labeled Black defendants as likely to reoffend.

Each failure deepens public scrutiny and cynicism about the ethics of using AI for important decisions that affect human lives. Many believe that AI inevitably creates this bias by its very nature, because algorithms are not capable of sharing or understanding human values. But the reality is that AI does not create this bias on its own. AI exposes the implicit bias already present in human systems; it simply mimics and amplifies human behavior. For instance, the hiring model that discriminated against women learned from patterns in hiring decisions made by managers before the model was ever built.

But it is actually far easier to interrogate and ultimately change an algorithm’s decision-making than it is to change a human’s behavior. At DataRobot, we believe this presents an enormous opportunity and responsibility for all of us who work in AI. It is possible to use specific data science tools to understand bias in decision-making, not just to create more ethical AI, but also to illuminate and ameliorate human biases we may not be aware of in our systems.

Tackling Bias and Fairness in AI with DataRobot

In release 6.3, DataRobot is introducing a new set of fairness tools specifically tailored for evaluating, understanding, and ultimately mitigating bias in AI.

Select the attributes in your dataset that you want to protect from bias.

To tackle AI bias, you first need to define the attributes along which you want your model to treat individuals fairly. In many industries and countries, these groups are protected by law and often include gender, race, age, religion, and more. These features can now be selected for bias and fairness testing within DataRobot.
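As a rough, generic illustration of this first step (not DataRobot’s API), a protected-attribute list might be declared and validated against the training data like this; the file name, column names, and favorable outcome below are hypothetical.

```python
import pandas as pd

# Hypothetical hiring dataset; the file name and columns are illustrative only.
df = pd.read_csv("hiring_applications.csv")

# Attributes we want to protect from bias (often legally protected classes),
# plus the outcome value we consider favorable for applicants.
protected_features = ["gender", "race", "age_group"]
favorable_outcome = "hired"

# Fail fast if a protected attribute is missing before any fairness analysis runs.
missing = [col for col in protected_features if col not in df.columns]
if missing:
    raise ValueError(f"Protected features not found in dataset: {missing}")
```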

DataRobot helps you choose an appropriate fairness metric for your use case.

Just like accuracy metrics, there are dozens of different ways to measure fairness, and each definition is suitable for different use cases. For instance, a healthcare provider using AI to prescribe effective medication likely wants to use a different definition of fairness than an HR team trying to ensure that their AI hiring model is fair.

But also like accuracy metrics, a handful of fairness metrics cover the vast majority of bias and fairness use cases. In DataRobot, you can now select one of the seven most common fairness metrics for any use case. And if you’re not sure which definition of fairness is appropriate, we also provide a guided workflow that asks questions about the ethics and impact of your particular use case and directs you toward an appropriate fairness definition.

We explain each question and our recommendation to help you understand why you may want to use one fairness definition over another. We also explain the impact of that specific definition of fairness on the individuals who will be affected by your model.
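To make the idea of a fairness metric concrete, here is a minimal, generic sketch (not DataRobot’s implementation) of one widely used definition, proportional parity: the rate of favorable predictions for each protected class, scaled so that the most favored class scores 1.0. The example data is invented.

```python
import pandas as pd

def proportional_parity(y_pred: pd.Series, protected: pd.Series, favorable=1) -> pd.Series:
    """Favorable-prediction rate per protected class, relative to the most
    favored class; a score of 1.0 means parity. Illustrative sketch only."""
    favorable_rate = (y_pred == favorable).groupby(protected).mean()
    return favorable_rate / favorable_rate.max()

# Invented predictions from a hypothetical hiring model.
preds = pd.Series([1, 0, 0, 1, 1, 1, 0, 0])
gender = pd.Series(["F", "F", "F", "M", "M", "M", "M", "F"])
print(proportional_parity(preds, gender))
# F scores roughly 0.33 relative to M's 1.00, signalling a potential disparity.
```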

Build automated insights to help identify and understand bias in your model.

We have three new bias and fairness insights available in DataRobot 6.3.

Per-Class Bias 

The Per-Class Bias chart shows you if your model is biased against any of your protected groups based on your selected definition of fairness, and if so, how the model is biased. 
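As a rough sketch of what such a per-class check boils down to (again generic Python, not the actual insight), you can compute a fairness score per protected class and flag any class that falls below a chosen threshold; the 0.8 cutoff echoes the common four-fifths rule and is only an example.

```python
import pandas as pd

# Invented predictions and protected attribute for a hypothetical model.
preds = pd.Series([1, 0, 0, 1, 1, 1, 0, 0])
gender = pd.Series(["F", "F", "F", "M", "M", "M", "M", "F"])

# Favorable-prediction rate per class, scaled by the most favored class
# (the same proportional-parity idea sketched above).
rate = (preds == 1).groupby(gender).mean()
scores = rate / rate.max()

fairness_threshold = 0.8  # example threshold, echoing the four-fifths rule
for group, score in scores.items():
    status = "below threshold" if score < fairness_threshold else "ok"
    print(f"{group}: fairness score {score:.2f} ({status})")
```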

Cross-Class Data Disparity

The Cross-Class Data Disparity chart allows you to dig deeper into your model’s bias and understand why the model is treating your protected groups differently. It allows you to compare the data distribution across your protected groups and figure out where in the data the model may be learning its bias. Additionally, the Cross-Class Data Disparity insight can direct you towards specific bias mitigation strategies that you can apply. For example, you may find that your model treats one protected group differently because your data collection is worse for that group, and it has more missing values for an important feature. This insight can direct you to improve your data collection or sampling methods in order to ultimately mitigate the bias that was uncovered in the underlying data.
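A hedged, generic sketch of the kind of comparison this insight automates: look at how a feature’s missingness and distribution differ across protected classes. The data and column names below are invented.

```python
import numpy as np
import pandas as pd

# Invented dataset; columns are illustrative only.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],
    "years_experience": [np.nan, 4.0, np.nan, 5.0, 7.0, 6.0],
    "referral_score": [0.2, 0.3, 0.1, 0.8, 0.9, 0.7],
})

by_class = df.groupby("gender")

# Large gaps in missingness or in the distribution of an important feature
# can point to where the model is learning its bias from the underlying data.
print(by_class["years_experience"].apply(lambda s: s.isna().mean()))  # missing-value rate per class
print(by_class["referral_score"].describe())                          # distribution per class
```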

Bias vs. Accuracy Leaderboard Comparison

The Bias vs. Accuracy graph allows you to compare multiple leaderboard models at once so that you can select a model that is both fair and accurate.
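Conceptually, the comparison reduces to inspecting accuracy and fairness side by side for each model; here is a minimal sketch with invented leaderboard numbers.

```python
# Invented leaderboard: (model name, accuracy, fairness score) triples,
# purely to illustrate the trade-off between the two dimensions.
leaderboard = [
    ("Elastic-Net Classifier", 0.81, 0.95),
    ("Gradient Boosted Trees", 0.88, 0.62),
    ("Light Gradient Boosting", 0.86, 0.84),
]

fairness_threshold = 0.8  # example cutoff

# Pick the most accurate model that still meets the fairness threshold;
# here that is the third model rather than the most accurate one overall.
acceptable = [m for m in leaderboard if m[2] >= fairness_threshold]
best = max(acceptable, key=lambda m: m[1])
print(f"Selected: {best[0]} (accuracy={best[1]:.2f}, fairness={best[2]:.2f})")
```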

Summary

Ultimately, at DataRobot we believe that bias and fairness testing must become a routine and necessary part of AI projects. We are committed to building the tools to make bias and fairness testing accessible so that anyone can concretely and systematically implement ethical AI.

Get Your Hands on Bias and Fairness Testing Today

Bias and fairness testing is part of DataRobot’s AutoML 6.3 release, as well as our managed cloud platform. If you’re an existing DataRobot customer, contact our Customer Support team to request that the feature be enabled for your account. If you are running DataRobot v6.3 on-premises or in a private cloud, your DataRobot account team will help you enable it.

More Information on DataRobot’s Bias and Fairness Testing

You can also visit the DataRobot Community to learn more about DataRobot 6.3 and watch a demo of this exciting new feature.

White Paper
Humility in AI: Building Trustworthy and Ethical AI Systems
About the author
Jett Oristaglio

Data Science and Product Lead

Jett Oristaglio is the Data Science and Product Lead of Trusted AI at DataRobot. He has a background in cognitive science, with a focus on computer vision and neuroethics. His primary mission at DataRobot is to answer the question: “What is everything that’s needed in order to trust an AI system with our lives?”
