Detecting Bias and Delivering Trust in AI
Stories of bias in AI abound: Amazon’s recruiting tool, Apple’s credit card limits, Google’s facial recognition, and dozens more. The quick reaction is to blame the algorithm and its designers for creating a biased model.
However, AI does not create bias on its own; it learns from data generated by us: our human systems and behaviors. AI simply exposes and amplifies the bias already present in whatever decisions it was designed to imitate. We need to reframe the conversation: identifying AI bias is the first step in building more ethical decision systems.
In this talk, we show how machine learning can make implicit bias in decisions diagnosable, correctable, and ultimately preventable in a way that is hard to replicate in human decision-making, which is opaque and resistant to change. Bias is not new, but AI offers a new, powerful toolset to measure and change it.
The goal of this Learning Session is twofold: to provide a theoretical understanding of bias and fairness, and to demonstrate how you can tackle AI bias using the tools and insights available in the DataRobot Bias and Fairness Suite. After all, it is not a question of whether your institution has bias, but of how you plan to handle it.
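To make the idea of measuring bias concrete, here is a minimal, generic sketch of one common fairness metric, the demographic parity gap: the difference in favorable-outcome rates between groups. The function name, group labels, and data are illustrative assumptions, not drawn from the DataRobot suite.

```python
def demographic_parity_gap(outcomes, groups):
    """Return the max difference in favorable-outcome rate across groups.

    outcomes: list of 0/1 model decisions (1 = favorable, e.g. "hire")
    groups:   list of group labels, aligned with outcomes
    """
    # Tally (count, favorable count) per group.
    tallies = {}
    for y, g in zip(outcomes, groups):
        n, k = tallies.get(g, (0, 0))
        tallies[g] = (n + 1, k + y)
    # Favorable-outcome rate per group.
    rates = {g: k / n for g, (n, k) in tallies.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" gets a favorable outcome 3/4 of the time,
# group "b" only 1/4 of the time, so the gap is 0.5.
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
print(gap)  # 0.5
```

A gap of zero means every group receives favorable outcomes at the same rate; in practice, monitoring a metric like this is the diagnosable step, which correction and prevention then build on.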
- DataRobot webinar: How to Stop Worrying and Start Tackling AI Bias
- Blog: How Do You Define Unfair Bias in AI?
- DataRobot public documentation: Bias & Fairness