
Bias and Fairness

Test your models for bias and fix issues before they impact performance.

AI That Shares Your Ethics and Values

The AI explosion has accelerated across dozens of industries. Machine learning models are becoming more commonplace in the real world, with widespread impact on customers, consumers, and the general public. But this explosion has a downside: as the use of AI increases, we are discovering that many of these models contain unintended bias, such as hiring models that discriminate against certain groups of applicants.

At DataRobot, we want users to understand how their AI models and data behave and know if they contain bias. Our unique Bias and Fairness tools test your models for bias, and help you perform root cause analysis to identify the likely source. This allows you to fix issues before they materialize and make appropriate trade-offs between model bias and accuracy.

Guide
A Guide to Building Fair and Unbiased AI Systems
Pick the Best Fairness Metric for the Use Case at Hand

Bias and fairness testing starts with declaring your protected features: the features against which you want to test whether the model exhibits biased behavior. DataRobot provides five industry-standard fairness metrics for checking model bias, and the right one depends on your use case. To help you choose, DataRobot offers a guided questionnaire with natural-language explanations that leads you to the most appropriate metric.
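The metric selection itself happens in the DataRobot UI, but the arithmetic behind a metric like proportional parity is straightforward: compare each class's favorable-outcome rate against the most-favored class. Here is a minimal sketch using pandas; the feature name, data, and threshold are hypothetical illustrations, not DataRobot's implementation:

```python
import pandas as pd

# Hypothetical scored data: one protected feature ("gender") and the
# model's binary predictions (1 = the favorable outcome, e.g. "hired").
scores = pd.DataFrame({
    "gender":     ["F", "M", "F", "M", "F", "M", "F", "M"],
    "prediction": [1,   1,   0,   1,   1,   1,   0,   1],
})

# Proportional parity: each class's favorable-outcome rate, scaled so
# the most-favored class scores 1.0.
rates = scores.groupby("gender")["prediction"].mean()
parity = rates / rates.max()
print(parity)
# Class F scores 0.5 here (2/4 favorable vs. 4/4 for M); a class falling
# below a chosen threshold (0.8 under the common four-fifths rule) would
# be flagged as receiving biased treatment.
```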

Test for Bias and Understand Root Cause

Once training is complete, DataRobot offers a variety of insights that reveal whether your models behave in a biased manner toward the protected features you declared. The Per-Class Bias insight shows the model's fairness score for each class within each protected feature, so you can see the degree of bias the model exhibits. The Cross-Class Data Disparity insight then helps you trace the root cause of any bias by comparing how each feature's values are distributed between the protected and unprotected classes. Finally, the Bias vs. Accuracy comparison lets you see the trade-off between accuracy and bias across many different models.
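To make the Bias vs. Accuracy trade-off concrete, here is a conceptual sketch (not DataRobot's implementation) that scores several candidate models on both accuracy and a worst-class parity score; the model names and holdout data are made up:

```python
import pandas as pd

# Hypothetical holdout results for three candidate models: "actual" holds
# the true labels, each model column that model's binary predictions.
df = pd.DataFrame({
    "gender":  ["F", "M"] * 4,
    "actual":  [1, 1, 0, 1, 0, 1, 1, 0],
    "model_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "model_b": [0, 1, 0, 1, 0, 1, 0, 0],
    "model_c": [1, 1, 1, 1, 0, 1, 1, 0],
})

for model in ["model_a", "model_b", "model_c"]:
    accuracy = (df[model] == df["actual"]).mean()
    rates = df.groupby("gender")[model].mean()    # per-class favorable rate
    fairness = (rates / rates.max()).min()        # worst class's parity score
    print(f"{model}: accuracy={accuracy:.2f}, fairness={fairness:.2f}")

# model_a is the most accurate but not the fairest, while model_c trades a
# little accuracy for perfect parity -- the kind of trade-off the Bias vs.
# Accuracy comparison surfaces.
```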

Proactive Bias Monitoring for Your Production Models

After your model is deployed, MLOps lets you view the Per-Class Bias plot for each day since the deployment was created. A model may show no bias at the start of a deployment, but over time it can begin to exhibit biased behavior that was not detected at training time. MLOps monitors your models using the same fairness metric you selected when you built the model and alerts you if the model falls below the threshold you set. If bias is detected, you can then use the Data Drift insight to see whether new data differs from the training data and identify the root cause.
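Conceptually, this monitoring amounts to scoring each day's production predictions with the chosen fairness metric and raising an alert when the score crosses the threshold. A minimal sketch, with hypothetical daily scores and a four-fifths-rule threshold:

```python
import pandas as pd

# Hypothetical daily fairness scores for a deployment, computed with the
# same metric that was selected at training time.
daily = pd.DataFrame({
    "date":  pd.date_range("2024-01-01", periods=5, freq="D"),
    "score": [0.91, 0.88, 0.84, 0.79, 0.74],
})

THRESHOLD = 0.8  # e.g. the four-fifths rule

for _, row in daily.iterrows():
    if row["score"] < THRESHOLD:
        # In production this would trigger an alert; Data Drift is the
        # natural next stop for root-cause analysis.
        print(f"ALERT {row['date'].date()}: fairness {row['score']:.2f} "
              f"fell below threshold {THRESHOLD}")
```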

Minimize Risk through Bias Mitigation

Mitigating bias is critical to building models you can trust. DataRobot simplifies the process with a no-code, out-of-the-box bias mitigation solution offering two workflows: automatically run bias mitigation to add fairness to the top three models found, or manually mitigate biased behavior in any individual model using a feature you choose. Either way, you retain control: you can see exactly how bias mitigation was performed and choose which features are mitigated for bias.
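DataRobot handles mitigation without code; as background, one common post-processing approach (not necessarily the exact technique DataRobot applies) adjusts the decision threshold per class so that each class receives the favorable outcome at a similar rate. A sketch with hypothetical scores:

```python
import pandas as pd

# Hypothetical predicted probabilities alongside the protected feature.
df = pd.DataFrame({
    "gender": ["F"] * 5 + ["M"] * 5,
    "proba":  [0.30, 0.45, 0.55, 0.62, 0.70,
               0.40, 0.58, 0.66, 0.74, 0.90],
})

# Post-processing mitigation: pick a per-class decision threshold so each
# class receives the favorable outcome at (roughly) the same target rate.
TARGET_RATE = 0.4  # desired favorable-outcome rate for every class

# The (1 - rate) quantile of each class's scores becomes its threshold.
thresholds = df.groupby("gender")["proba"].quantile(1 - TARGET_RATE)
df["prediction"] = (df["proba"] >= df["gender"].map(thresholds)).astype(int)

# Both classes now receive the favorable outcome at the same 0.4 rate.
print(df.groupby("gender")["prediction"].mean())
```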


Start Delivering Trusted and Ethical AI Now