Bias and Fairness as Dimensions of Trusted AI

Tackling AI Bias

Algorithmic bias has been the subject of growing discussion and debate in the use of AI. It is a difficult topic to navigate, both because of the complexity of mathematically identifying, analyzing, and mitigating the presence of bias in the data and because of the social implications of determining what it means to be “fair” in decision-making. Fairness is situationally dependent, and it reflects your values, your ethics, and the legal regulations that apply to you. That said, there are clear ways to approach questions of AI fairness using the data and model, which can enable an internal discussion, and then steps you can take to mitigate issues of uncovered bias.

What Does It Mean for an AI Model to Be “Biased”?

While fairness is a socially defined concept, algorithmic bias is mathematically defined. A family of bias and fairness metrics in modeling describes the ways in which a model can perform differently for distinct groups within your data. Those groups, when they designate groups of people, might be identified by protected or sensitive characteristics, such as race, gender, age, or veteran status.

Where Does AI Bias Come From?

The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Bias might also be a product not of the historical process itself but of data collection or sampling methods misrepresenting the ground truth. Ultimately, machine learning learns from data, but that data comes from us—our decisions and systems.

How Do I Measure AI Bias?

There are two ways to think about disparities in performance for groups in your data. You might want to ensure fairness of error: that is, the model performs with the same accuracy across groups, so that no one group is subject to significantly more error in predictions than another. Alternatively, you might want to ensure fairness of representation: that is, the model is equitable in its assignment of favorable outcomes to each group. 

Within the broad categories of fairness of error and representation, there are specific metrics, or individual bias tests, that drill down into a more nuanced understanding of what you seek to measure. For instance, one bias metric called proportional parity satisfies fairness of representation. Proportional parity requires that each group receive the favorable outcome the same proportion of the time: for example, that female and male candidates in automated resume review are each accepted 25% of the time. You can also introduce conditions to modify the fairness metric. To expand on the previous example, conditional proportional parity might require that male and female candidates with the same number of years of experience clear automated resume review at the same rate.
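
To make this concrete, here is a minimal sketch of how a proportional parity check might look on a toy dataset. The column names ("gender", "accepted"), the tiny sample, and the 0.8 ratio threshold are illustrative assumptions for this example, not a specific product's implementation.

```python
import pandas as pd

# Toy resume-review data; "gender" and "accepted" are illustrative column names.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "accepted": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Selection rate: how often each group receives the favorable outcome.
selection_rates = df.groupby("gender")["accepted"].mean()

# Proportional parity compares each group's rate to the most favored group's.
# Flagging ratios below 0.8 is a common rule of thumb, assumed here for illustration.
parity_ratios = selection_rates / selection_rates.max()
print(parity_ratios)
print("Groups below the 0.8 threshold:", list(parity_ratios[parity_ratios < 0.8].index))
```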

How Do I Measure AI Fairness?

For bias metrics that measure fairness of error, accuracy itself can be measured in several ways.

  • Predictive parity compares the raw accuracy score between groups: the percentage of the time the outcome is predicted correctly
  • False positive and negative rate parity compares the false positive and false negative rates of the model across each group

It’s important to remember that these metrics measure different benchmarks for a model’s bias: predictive parity can be achieved even when false positive and negative rate parity between groups is poor. Select a fairness metric that captures the values and impact you want to measure for the use case.
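
As an illustration of how these error-based metrics differ, the following sketch computes per-group accuracy (the quantity compared for predictive parity) and per-group false positive and false negative rates on toy data. The column names and values are assumptions chosen only for the example.

```python
import pandas as pd

# Toy predictions; "group", "actual", and "pred" are illustrative column names.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual": [1, 0, 1, 0, 1, 0, 0, 0],
    "pred":   [1, 0, 0, 0, 1, 1, 0, 1],
})

def error_metrics(g: pd.DataFrame) -> pd.Series:
    tp = ((g["pred"] == 1) & (g["actual"] == 1)).sum()
    fp = ((g["pred"] == 1) & (g["actual"] == 0)).sum()
    tn = ((g["pred"] == 0) & (g["actual"] == 0)).sum()
    fn = ((g["pred"] == 0) & (g["actual"] == 1)).sum()
    return pd.Series({
        "accuracy": (tp + tn) / len(g),               # compared for predictive parity
        "fpr": fp / (fp + tn) if (fp + tn) else 0.0,  # false positive rate
        "fnr": fn / (fn + tp) if (fn + tp) else 0.0,  # false negative rate
    })

# Per-group metrics: large gaps in fpr/fnr can coexist with similar accuracy,
# which is why the two parity metrics can disagree.
print(df.groupby("group")[["actual", "pred"]].apply(error_metrics))
```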

DataRobot introduced a Bias and Fairness suite of tools to its AutoML platform to make bias and fairness testing a standardized part of the machine learning workflow.

How Do I Mitigate AI Bias?

You can address bias in AI systems at three phases of the modeling workflow:

  • Preprocessing refers to mitigation methods applied to the training dataset before a model is trained on it. One example is altering weights on rows of the data to achieve greater parity in assigned outcomes (a sketch appears after this list).
  • In-processing refers to mitigation techniques incorporated into the model training process itself.
  • Post-processing methods work on the predictions of the model to achieve the desired fairness.
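
As a concrete illustration of the preprocessing approach, here is a minimal reweighing sketch in the style of Kamiran and Calders: each row is weighted so that group membership and the favorable outcome look statistically independent in the training data. The column names and data are illustrative assumptions.

```python
import pandas as pd

# Toy training data; "group" and "outcome" are illustrative column names.
df = pd.DataFrame({
    "group":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "outcome": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Marginal and joint frequencies observed in the training data.
p_group = df["group"].value_counts(normalize=True)
p_outcome = df["outcome"].value_counts(normalize=True)
p_joint = df.groupby(["group", "outcome"]).size() / len(df)

# Reweighing: weight = P(group) * P(outcome) / P(group, outcome), so that
# group and outcome are independent under the weighted distribution.
weights = df.apply(
    lambda row: p_group[row["group"]] * p_outcome[row["outcome"]]
    / p_joint[(row["group"], row["outcome"])],
    axis=1,
)
print(df.assign(weight=weights))
# The weights can then be passed to an estimator that accepts sample weights,
# e.g. model.fit(X, y, sample_weight=weights).
```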