Detecting Bias and Delivering Trust in AI

August 17, 2020
by Linda Haviland · 2 min read

This post was originally part of the DataRobot Community. Visit now to browse discussions and ask questions about DataRobot, AI Platform, data science, and more.

Stories of bias in AI abound: Amazon’s recruiting tool, Apple’s credit card limits, Google’s facial recognition, and dozens more. The easy response is to blame the algorithm and its designers for creating a biased model.

However, AI does not create bias on its own; it learns from data generated by us, by our human systems and behaviors. AI simply exposes and amplifies the bias already present in whatever decisions it was designed to imitate. We need to reframe the conversation: identifying AI bias is the first step in building more ethical decision systems.

In this talk, we show how machine learning can make implicit bias in decisions diagnosable, correctable, and ultimately preventable. That is hard to replicate in human decision-making, which is opaque and resistant to change. Bias is not new, but AI represents a new, powerful toolset to measure and change it.
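To make "measuring bias" concrete, here is a minimal sketch of one widely used fairness metric, demographic parity, computed over hypothetical approval predictions for two groups. This is an illustration of the general idea, not DataRobot's implementation:

```python
# A minimal sketch of measuring demographic parity: the gap in
# positive-prediction rates between groups. All data is hypothetical.

def positive_rate(predictions, groups, group):
    """Share of positive predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

# Hypothetical binary predictions (1 = approve) and group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")

# A difference of 0.0 means both groups are approved at the same rate;
# a large gap is a measurable, diagnosable signal of bias.
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```

Because the metric is a simple number computed from model outputs, it can be tracked, compared across models, and used as a target for correction, which is exactly what makes bias in AI systems more tractable than bias in opaque human judgments.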

The goal of this Learning Session is twofold: to provide a theoretical understanding of bias and fairness, and to demonstrate how you can tackle AI bias using the tools and insights in the DataRobot Bias and Fairness Suite. After all, it’s not a question of whether you have bias in your institution, but how you plan to handle it.

More Information

GUIDE
Trusted AI 101: A Guide to Building Trustworthy and Ethical AI Systems

About the author
Linda Haviland

Community Manager