While many people believe AI can help solve complex problems, can we trust that the AI solutions directing our work and livelihoods are rooted in reliable, unbiased data? Do organizations have the proper systems in place to prevent, or quickly address, issues resulting from AI bias? We set out to answer these and other questions about how AI bias is currently perceived and mitigated.
DataRobot surveyed more than 350 U.S.- and U.K.-based CIOs, IT directors, IT managers, and development leads who use or plan to use AI. The survey explored how organizations perceive the issue of AI bias and its importance, which risks they consider greatest if AI bias is left unchecked, and what tools and capabilities they are looking for to help mitigate bias in AI.
The core challenge in eliminating bias is understanding why algorithms arrived at certain decisions in the first place. Organizations need guidance in navigating AI bias and the complex issues attached to it. There has been progress, including the EU's proposed AI principles and regulations, but there's still more to be done to ensure models are fair, trusted, and explainable.