The stories of bias in AI are everywhere: Amazon’s recruiting tool, Apple’s credit card limits, Google’s facial recognition, and dozens more. The easy response is to blame the algorithm and its designers. However, the only way to create fairer AI is to understand the true source of a model’s bias.
AI does not create bias on its own; it exposes the latent bias of the humans who created it. We need to reframe the conversation around bias in AI: detecting it is not a failure but the first step in building a more ethical, fairer system.
In this talk, we show how machine learning can surface the implicit bias of a human institution. In a model, bias becomes diagnosable, correctable, and ultimately preventable in a way that human decision-making, which is opaque and difficult to change, cannot match. Bias is not new, but AI offers a new toolset to measure and change it.
The goal is not only a theoretical understanding of bias, but a practical plan you can implement right away to improve your AI development and strengthen your trust in AI. After all, it’s not a question of whether you have bias in your institution, but how you plan to handle it.