
Bias and Fairness

Test your models for bias and fix issues before they impact performance.


AI That Shares Your Ethics and Values

The AI explosion has reached a fever pitch across dozens of industries. Machine learning models are becoming ubiquitous in the real world, with widespread impact on customers, consumers, and the general public. But this explosion often has a negative side: as AI proliferates, we are starting to see that many of these models contain unintended bias, such as hiring models that discriminate against certain groups of applicants.

At DataRobot, we want users to understand how their AI models and data behave and know if they contain bias. Our unique Bias and Fairness tools test your models for bias, and help you perform root cause analysis to identify the likely source. This allows you to fix issues before they materialize and make appropriate trade-offs between model bias and accuracy.


Pick the Best Fairness Metric for the Use Case at Hand

Bias and fairness testing starts with declaring your protected features: the features against which you want to test whether the model exhibits biased behavior. DataRobot provides five industry-standard fairness metrics you can use to check for model bias, depending on your use case. To determine which metric is meaningful for your use case, DataRobot offers a guided questionnaire with natural-language explanations that helps you select the most appropriate one.
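
To make the idea concrete, here is a minimal sketch of one common fairness metric, proportional parity: the rate of favorable predictions for each class of a protected feature, scaled against the most favored class. This is an illustrative calculation, not DataRobot's implementation, and the column names and data are hypothetical.

```python
import pandas as pd

# Hypothetical scoring output: one row per applicant, with the model's
# prediction and the protected feature we declared (e.g., "gender").
scores = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "M", "F", "M", "F"],
    "prediction": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = favorable outcome
})

def proportional_parity(df, protected_feature, prediction_col, favorable=1):
    """Rate of favorable predictions per class, scaled so the most
    favored class scores 1.0. Lower scores indicate relative disadvantage."""
    favorable_rate = (
        df.groupby(protected_feature)[prediction_col]
          .apply(lambda s: (s == favorable).mean())
    )
    return favorable_rate / favorable_rate.max()

print(proportional_parity(scores, "gender", "prediction"))
```

In this toy example the "F" class receives the favorable outcome half as often as the "M" class, which is exactly the kind of gap the per-class insights surface.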

Test for Bias and Understand Root Cause

Once training is complete, DataRobot offers a variety of insights to see whether your models behave in a biased manner toward the protected features you declared for your dataset. The Per-Class Bias insight lets you view the model's fairness score for each class within each protected feature, so you can gauge the degree of bias your model exhibits. The Cross-Class Data Disparity insight then helps you find the root cause of any bias by showing how the distribution of any feature differs between the protected and unprotected classes. Finally, the Bias vs Accuracy comparison lets you see the trade-off between accuracy and bias across many different models.
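
The intuition behind a cross-class data disparity check can be pictured with a rough sketch (again, not the product's actual calculation): compare the binned distribution of a feature between two protected classes and summarize the difference as a single score. The feature name and data below are assumed for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical training data with a protected feature ("gender") and an
# ordinary feature we suspect is driving the bias ("years_experience").
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)
train = pd.DataFrame({
    "gender": np.repeat(["F", "M"], 100),
    "years_experience": np.concatenate([
        rng_a.normal(4, 2, 100),   # class "F"
        rng_b.normal(8, 2, 100),   # class "M"
    ]),
})

def data_disparity(df, protected_feature, feature, class_a, class_b, bins=10):
    """Crude cross-class disparity: total variation distance between the
    binned distributions of `feature` for two protected classes.
    0 means identical distributions; 1 means completely disjoint."""
    values = df[feature]
    edges = np.histogram_bin_edges(values, bins=bins)
    hist_a, _ = np.histogram(values[df[protected_feature] == class_a], bins=edges)
    hist_b, _ = np.histogram(values[df[protected_feature] == class_b], bins=edges)
    p = hist_a / hist_a.sum()
    q = hist_b / hist_b.sum()
    return 0.5 * float(np.abs(p - q).sum())

print(data_disparity(train, "gender", "years_experience", "F", "M"))
```

A feature with a large disparity score between the protected classes is a natural candidate for root cause analysis, since the model may be using it as a proxy for the protected feature itself.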

Proactive Bias Monitoring for Your Production Models

After your model is deployed, MLOps lets you view the Per-Class Bias plot for each day since the deployment was created. At the beginning of the deployment the model might not be biased, but over time it can start to exhibit biased behavior that was not detected at training time. MLOps monitors your models using the same fairness metric you selected when you built the model and alerts you if the model falls below the threshold you set. If bias is detected, you can then use the Data Drift insight to check whether new data differs from the training data and help identify the root cause.
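
The monitoring loop can be sketched as two simple checks: compare each day's worst per-class fairness score against the threshold chosen at build time, and use a basic drift statistic such as the Population Stability Index to see whether production data has shifted away from the training data. The thresholds, dates, and data below are illustrative assumptions, not output from MLOps.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between training-time ("expected") and
    production ("actual") values of a feature. A common rule of thumb is
    that values above 0.2 suggest a shift worth investigating."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_hist, _ = np.histogram(expected, bins=edges)
    a_hist, _ = np.histogram(actual, bins=edges)
    # Small floor avoids division by zero / log of zero for empty bins.
    e_pct = np.clip(e_hist / e_hist.sum(), 1e-6, None)
    a_pct = np.clip(a_hist / a_hist.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def check_daily_fairness(daily_scores, threshold=0.8):
    """Flag any day on which the worst per-class fairness score falls
    below the threshold chosen at model-building time."""
    return {day: score for day, score in daily_scores.items() if score < threshold}

# Hypothetical daily minimum per-class fairness scores from production scoring.
daily_scores = {"2021-06-01": 0.91, "2021-06-02": 0.85, "2021-06-03": 0.74}
print(check_daily_fairness(daily_scores))      # the last day breaches the threshold

rng = np.random.default_rng(42)
training_feature = rng.normal(5, 1, 1000)      # distribution at training time
production_feature = rng.normal(6, 1, 1000)    # distribution in production
print(round(psi(training_feature, production_feature), 3))
```

In practice the alerting and drift calculations are handled for you by MLOps; the sketch only shows why a fairness threshold plus a drift check is enough to catch bias that emerges after deployment.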

Start Delivering Trusted and Ethical AI Now