5 Key Findings from DataRobot’s State of AI Bias Report

December 4, 2019 · by Ted Kwartler · 3 min read

As AI adoption increases among all business verticals, executives are seeking ways to mitigate risk and understand how AI bias diminishes AI effectiveness or, worse yet, increases company risk.

By now, most business executives recognize the promise that AI can bring to their organizations. In fact, the confluence of data collection, low-cost computing, and open source technology has enabled widespread AI adoption. Executives realize that swiftly implementing AI is a differentiator today and will become standard practice in the future. However, as with any emerging technology, risks need to be defined, recognized, and mitigated to avoid poor performance. To take a closer look at these issues, DataRobot surveyed more than 350 executives from the most sophisticated technology organizations to better understand their expectations and actions concerning AI bias, and how they are making sure that AI’s benefits outweigh the risks.

Our survey, The State of AI Bias in 2019, focuses specifically on how executives are addressing AI bias to maximize business impact and how business leaders expect to reduce technology risks.

Here’s a closer look at five insightful findings from the survey:

Managing AI Trust

(1) Nearly all respondents (93%) expect to invest more in AI bias prevention initiatives in the next 12 months. This indicates that executives are becoming more data-fluent and are recognizing the need for model evaluations, processes, and guardrails to engender AI trust.

(2) A surprising subtext to this finding is that executives place a lot of faith in third parties, with 48% of executives relying on third-party vendors to execute AI bias prevention initiatives. Expert AI vendors can help organizations succeed by stewarding them toward owning their AI from strategy to implementation to monitoring. But using a third-party vendor does not absolve the executive of AI risks. Organizations need to internalize a comprehensive AI strategy that includes model evaluation, system safety controls, and ongoing monitoring.

Still Using Black Box

(3) More than a third (38%) of the organizations surveyed state that they use black box systems across many company functions and departments. Among these respondents, reliance on black box AI may indicate immature technology governance, which invites challenges because the organization has not verified that the AI behaves as expected. To reduce the chances of AI bias, an organization should ensure that all aspects of its AI models are explainable. This does not mean the AI must be fully transparent, but it must be explainable.

Are AI decisions intuitive? Is the AI humble when it is uncertain about a decision? Explainable AI is a cornerstone of safely implementing AI before moving a model into production. Keep in mind that explainability information differs by persona in a modeling workflow. For example, data scientists use Shapley values to explain feature contributions, while compliance officers seek information related to data provenance and protected data inputs.
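To make the Shapley-value idea concrete, here is a minimal, self-contained sketch of how a per-feature Shapley attribution can be computed for a single prediction. The toy credit-scoring model, feature names, and baseline are hypothetical illustrations, not from the survey; production tools (such as the SHAP library) use faster approximations, whereas this exact enumeration is exponential in the number of features and only suitable for tiny examples.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x), treating a
    'missing' feature as taking its baseline value. Exponential in
    the number of features -- for illustration only."""
    n = len(x)
    features = list(range(n))

    def value(subset):
        # Evaluate f with features in `subset` taken from x, the rest from baseline.
        z = [x[i] if i in subset else baseline[i] for i in features]
        return f(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i given coalition S.
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear credit-scoring model: score from income, debt, tenure.
model = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
x = [3.0, 1.0, 4.0]          # one applicant's feature values
baseline = [0.0, 0.0, 0.0]   # reference ("average") applicant

phi = shapley_values(model, x, baseline)
# For a linear model, each phi[i] is approximately coef_i * (x[i] - baseline[i]),
# and the attributions sum to model(x) - model(baseline).
```

A useful sanity check, and the reason Shapley values suit compliance conversations, is the efficiency property used in the final comment: the per-feature attributions always sum to the difference between the model's prediction and the baseline prediction, so every point of the score is accounted for.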

Differences Among Markets

(4) Both U.S. and U.K. executives share a concern about “software bugs” in AI systems (53% and 58%, respectively). However, the starkest geographic distinction emerges when executives were asked about unethical uses of an AI system.

(5) Specifically, 21% of U.S. respondents list “unethical uses” as a concern, while 32% in the U.K. share this concern. My colleague Colin Priest, VP AI Strategy, correctly points out: “[executive] concerns about AI could relate to the regulatory and cultural state of affairs in each geography. The U.K. has elevated bias and AI trust as federal issues…the U.S., for the most part, only operates against recommended data guidelines.”

It makes sense that leaders are concerned with correct operation and bug-free software, since AI systems affect a large number of business-critical operations. However, it is interesting to see cultural and regulatory differences emerge in views on AI ethics. AI systems and their ethical implications remain culturally grounded, despite the highly technical nature of these systems.

In the end, I am encouraged that executives recognize overcoming AI bias as an important goal that merits wider adoption. Overall, executives are aware that public AI missteps can be damaging. Still, many of the responses illustrate a reactive mindset toward AI bias. Executive reactions are culturally shaped and still developing, but they point toward the need for governed, explainable AI systems to engender trust and realize actual value.

Ready to learn more? Download our Executive Survey on AI Bias here.


About the author
Ted Kwartler

Field CTO, DataRobot

Ted Kwartler is the Field CTO at DataRobot, where he sets product strategy for explainable and ethical uses of data technology. Ted brings unique insights and experience in applying data, business acumen, and ethics to his current and previous positions at Liberty Mutual Insurance and Amazon. In addition to authoring four DataCamp courses, he teaches graduate courses at the Harvard Extension School and is the author of “Text Mining in Practice with R.” Ted is an advisor to the US Government Bureau of Economic Affairs, sitting on a Congressionally mandated committee called the “Advisory Committee for Data for Evidence Building,” advocating for data-driven policies.
