How Do You Define Unfair Bias in AI?

May 15, 2019
by Colin Priest

Art is subjective and everyone has their own opinion about it. When I saw the expressionist painting Blue Poles, by Jackson Pollock, I was reminded of the famous quote by Rudyard Kipling, “It’s clever, but is it Art?” Pollock’s piece looks like paint messily spilled onto a drop sheet protecting the floor. The debate over what constitutes art has a long history and will probably never be settled; there is no definitive definition of art. Similarly, there is no broadly accepted objective definition for the quality of a piece of art, with the closest coming from Orson Welles, “I don’t know anything about art but I know what I like.”

Blue Poles by Jackson Pollock

Similarly, people recognize unfair bias when they see it, but it is quite difficult to create a single objective definition, because the key considerations vary from case to case. Many have attempted to define fairness for algorithms, resulting in multiple candidate definitions. Arvind Narayanan, an associate professor at Princeton, lists 21 different definitions of fairness for algorithms and concludes that no single definition applies in all cases. In this blog post, rather than prescribing one definition, I will list the four key questions you need to answer in order to derive a definition of unfair bias that matches your particular needs.

Which Attributes Should Be Protected?

Your AI must obey the law. Most countries have discrimination laws, under which it is unlawful to treat a person less favorably on the basis of particular legally protected attributes, such as a person’s sex, race, disability, or age. However, there is no universal set of protected attributes that applies under all circumstances, in all countries, for all uses. For example, it may be illegal to discriminate on the basis of gender when hiring employees but legal to charge different prices based on age.

But there’s more to unfair bias than obeying the law. You must also consider reputation risk and ethics. An organization may hold data containing sensitive attributes, such as a person’s health records or personal activities, whose values the person does not want disclosed and may not want used in decision making. Reputation risk arises when your AI uses sensitive features in a manner that society considers inappropriate, such as using data that was originally collected for a different purpose; the risk exists whenever the public revelation of a behavior or process could cause embarrassment or loss of brand value. Ethical issues arise when your AI uses sensitive features in situations involving vulnerable groups or asymmetries of power, or when AI is being used for purposes that are dishonest or not in society’s interests.

Equal Opportunity or Equal Outcome?

Equal opportunity is a state of fairness in which people are treated similarly regardless of their protected features, that is, as if the decision maker were blind to those features. Its violation is disparate treatment, defined as unequal behavior towards an individual or group on the basis of a protected feature. Equal opportunity is often a regulatory requirement, particularly in employment law. The philosophy behind equal opportunity is that people should not be penalized for protected attributes that are not related to the decision being made (e.g., the best person should be hired for the job, regardless of their race). Problems with disparate treatment can arise in AIs when they are trained on historical data that contains human biases or when minority groups are insufficiently represented in the data.
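The machine learning fairness literature often formalizes equal opportunity as equal true positive rates across protected groups (Hardt et al., 2016). As a rough illustration of that particular formulation (not necessarily the definition you will settle on), here is a minimal Python sketch; the DataFrame and its group, actual, and predicted columns are hypothetical:

```python
import pandas as pd

def true_positive_rate_by_group(df, group_col="group",
                                actual_col="actual", predicted_col="predicted"):
    """True positive rate per protected group.

    Equal opportunity (in the Hardt et al. sense) holds when these rates
    are approximately equal for every group.
    """
    qualified = df[df[actual_col] == 1]          # people who truly qualify
    return qualified.groupby(group_col)[predicted_col].mean()

# Hypothetical hiring data: every applicant shown here is genuinely qualified.
hiring = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [1, 1, 1, 1, 1, 1],
    "predicted": [1, 1, 0, 1, 0, 0],             # model's hire decision
})
print(true_positive_rate_by_group(hiring))       # A: 0.67, B: 0.33 -> unequal opportunity
```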

Equal outcome is a concept of fairness in which humans, grouped by their protected features, have similar outcomes on average, regardless of those protected features. This concept is closely related to disparate impact (also called adverse impact), defined as practices that adversely affect one group of people with a protected attribute more than another, even when the rules applied by employers or landlords are formally neutral (i.e., the rules don’t explicitly involve the protected attribute but do use attributes that are correlated with it). Equal outcomes may be a regulatory requirement (e.g., California mandates women on corporate boards) or an organizational strategy (e.g., seeking diversity). The philosophy behind equal outcomes is threefold:

  1. proxies for protected attributes should not be used to circumvent fairness regulations
  2. a person should not be penalized for historic disadvantage
  3. diversity is good for organizations

Problems with disparate impact can arise in AIs when they are trained on historical data that contains human biases, when minority groups are insufficiently represented in the data, or when the prevalence of negative outcomes varies between groups within the population. It is common for individuals to have unfair outcomes even when the group achieves equal outcomes.
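One widely used screen for disparate impact at the group level is the four-fifths (80%) rule from US employment guidelines: each group’s selection rate should be at least 80% of the most-favored group’s rate. Here is a minimal sketch of that screen; the group and selected column names are hypothetical, and the rule is a heuristic, not a legal determination:

```python
import pandas as pd

def disparate_impact_ratios(df, group_col="group", outcome_col="selected"):
    """Selection rate of each group divided by the highest group's rate.

    Under the four-fifths rule, ratios below 0.8 are commonly treated as
    a warning sign of adverse impact.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical application data: group A selected at 50%, group B at 20%.
applications = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "selected": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
                 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
})
print(disparate_impact_ratios(applications))  # B / A = 0.4 -> fails the 80% screen
```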

Note that with real-life data, it is generally not possible to achieve both equal outcomes and equal opportunity: whenever the underlying rates of the outcome differ between groups, the two goals are mathematically incompatible. You will need to choose which definition of fairness to apply to your task.
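To see why, consider a toy example with made-up numbers, where 70% of group A is qualified but only 40% of group B is. Applying one identical rule to everyone (equal treatment) produces unequal hiring rates, while forcing equal hiring rates requires treating the two groups differently:

```python
import pandas as pd

# Hypothetical applicant pool with different qualification rates per group.
pool = pd.DataFrame({
    "group":     ["A"] * 10 + ["B"] * 10,
    "qualified": [1] * 7 + [0] * 3 + [1] * 4 + [0] * 6,
})

# Equal treatment: the same rule ("hire the qualified") for everyone...
pool["hired_equal_treatment"] = pool["qualified"]
print(pool.groupby("group")["hired_equal_treatment"].mean())
# ...yields unequal outcomes: A hired at 70%, B hired at 40%.

# Equal outcome: hiring exactly 50% of each group...
pool["hired_equal_outcome"] = (pool.groupby("group").cumcount() < 5).astype(int)
print(pool.groupby("group")["hired_equal_outcome"].mean())
# ...means some qualified A applicants are rejected while an unqualified
# B applicant is hired, i.e., individuals are treated differently by group.
```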

Groups or Individuals?

Fairness can be measured at the group level or at the individual level. Do you wish to ensure that, on average, you don’t discriminate against a protected group, or to apply the protection to each and every individual? For example, you may hire female job applicants with the same average probability as male applicants, which achieves group fairness. However, you may still be biased if you assign junior roles with higher probability to women and senior roles with higher probability to men. Achieving individual fairness is more difficult than achieving group fairness: it takes more time and effort to ensure that each and every individual is treated no differently because of their protected attributes. Algorithmic AI decision making (decisions made by a machine without human intervention) may make individual fairness easier to achieve than it was when humans made those decisions, because you can do what-if analysis and objectively check whether changing protected attributes (and related proxies) changes the decision.
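Here is a rough, generic sketch of that kind of what-if analysis (not DataRobot’s implementation): flip the protected attribute for every individual, re-score them, and count how many decisions change. The model is assumed to be any fitted classifier with a scikit-learn-style predict method, and the column name and values are hypothetical:

```python
import pandas as pd

def counterfactual_flip_rate(model, X: pd.DataFrame,
                             protected_col: str = "gender",
                             values: tuple = ("male", "female")) -> float:
    """Fraction of individuals whose decision changes when only the
    protected attribute is swapped; a non-zero fraction indicates that
    otherwise-identical individuals are treated differently.

    Caveat: this only catches direct use of the attribute. Correlated
    proxies require the indirect-bias checks discussed below.
    """
    original = model.predict(X)
    flipped = X.copy()
    flipped[protected_col] = flipped[protected_col].map(
        {values[0]: values[1], values[1]: values[0]}
    )
    return float((model.predict(flipped) != original).mean())
```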

The choice of group fairness versus individual fairness is related to the decision of whether to target equal opportunity or equal outcomes. If you choose to target equal outcomes, then you cannot ensure individual fairness.

Note that achieving individual fairness ensures that you have achieved group fairness.

Direct or Indirect Bias?

Direct bias occurs when you directly use a protected attribute to make a decision. Avoiding it is relatively easy in an AI: if your AI does not use the protected attribute in its decision making, then it has successfully avoided direct bias. In DataRobot, you can check for direct bias by viewing the feature impact, the feature effects, and the prediction explanations.
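Outside DataRobot, a rough analogue of a feature impact check is permutation importance: shuffle the protected column and measure how much the model’s score degrades. This is a minimal sketch, assuming a fitted scikit-learn estimator, a feature DataFrame X that still contains a hypothetical protected column age, and labels y:

```python
from sklearn.inspection import permutation_importance

def protected_attribute_impact(model, X, y, protected_col="age"):
    """Estimate how much the model relies on a protected attribute.

    An importance near zero suggests the attribute has little direct
    influence; indirect bias via correlated proxies can still remain.
    """
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    impacts = dict(zip(X.columns, result.importances_mean))
    return impacts[protected_col]
```

The simplest safeguard is, of course, to drop the protected column from the training features altogether; a check like this is for confirming that a column you left in is not quietly driving decisions.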

Indirect bias can occur when the values of other attributes are correlated with the value of a protected attribute (e.g., a person’s height is related to their age, gender, and race). A person may be indirectly discriminated against if such a proxy attribute is used to make a decision about them. This can occur subtly and is much more difficult to discover and remove than direct discrimination. The solution is a four-stage process (the first step is sketched after this list) involving

  1. finding correlated attributes,
  2. measuring the effects of correlated attributes,
  3. deciding whether those correlated attributes are true effects or mere proxies, and
  4. removing the proxy attributes, or transforming the data to negate the unfair bias.
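Here is a rough sketch of step 1 only; the is_female column name and the 0.3 threshold are illustrative assumptions, and plain correlation misses non-linear relationships, so treat it as a first pass rather than a complete proxy search:

```python
import pandas as pd

def candidate_proxies(df: pd.DataFrame,
                      protected_col: str = "is_female",
                      threshold: float = 0.3) -> pd.Series:
    """Rank numeric features by absolute correlation with a binary-encoded
    protected attribute. Anything above the threshold is a candidate proxy
    that deserves the scrutiny of steps 2 and 3 above.
    """
    corr = df.corr(numeric_only=True)[protected_col].abs()
    return (corr.drop(labels=[protected_col])
                .loc[lambda s: s > threshold]
                .sort_values(ascending=False))
```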

DataRobot provides human-friendly insights that make the process of discovering and addressing indirect bias quicker and simpler.

Conclusion

There’s more to building an AI than just predictive accuracy. Unfair bias can cost your organization via suboptimal hiring or marketing, reputation damage, or even legal action. Unfair bias is a complex issue that requires consensus within your organization and is most effectively managed in the early stages of an AI project. Before you start your next AI project, ensure that your organization has an AI ethics statement that defines:

  • ethical purposes and uses of AI,
  • which personal attributes an AI can use for decisions and which will be protected,
  • what details will be disclosed to all AI stakeholders,
  • auditability of AI systems, and
  • a process of accountability and governance for AI systems.

The path to trusting that an AI is acting fairly includes knowing whether the patterns it is using are suitable and reasonable and knowing which input features the model is using to make its decisions. The latest generation of AI gives you human-friendly explanations of what patterns it uses and why it made a particular decision. If your current AI cannot provide human-friendly explanations, then it’s time to update to DataRobot for AI that you can trust. Click here to arrange for a demonstration of DataRobot, showing how you can trust an AI.


About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
