
Common Challenges of Trusting Your AI

October 26, 2021
by Scott Reed · 3 min read

AI is everywhere, from our phones and living rooms to factory floors and financial institutions. Still, one of the biggest obstacles to even more widespread acceptance of AI is a lack of trust. DataRobot defines the benchmark of AI maturity as AI you can trust, and the highest bar for AI trust can be summed up in a single question: What would it take for you to trust an AI system with your life? This blog post explores some of the biggest obstacles to earning that trust.

Dimensions of Trust

DataRobot organizes the concept of trust in an AI system into three main categories: performance, operations, and ethics. Within these categories are thirteen dimensions of trust that, when seen holistically, constitute a system that can earn your trust. The dimensions are discussed at length in another blog post, but here, the focus is on the most common obstacles to earning trust in AI systems.

Transparency

As with all relationships, the first step to establishing trust is transparency. But establishing and maintaining transparency is an obstacle for many organizations. Transparency doesn’t necessarily mean overwhelming users with data and alerts. It’s important to understand what level of information a user expects and then deliver a personalized experience that makes them feel comfortable.

Another aspect of transparency that may prove challenging is being clear about what a system is doing as it collects data. People may be concerned that systems are “looking” at them. So the question is: How does an AI system let people know when this information is being collected? Answering it becomes a balancing act between utility and privacy, but the onus is on the AI system itself to let the individual determine which way that balance swings.

Explainability

If someone can’t explain how they arrived at a conclusion, it’s hard to trust them. The same can be said of an AI system: “To trust computer decisions, ethical or otherwise, people need to know how an AI system arrives at its conclusions and recommendations.” While explainability can be difficult to achieve in complex machine learning systems, DataRobot supports two common methods for explaining individual predictions: XEMP and SHAP. It also provides insights like feature impact, feature effects, and lift charts to help you understand all aspects of your model.
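To make the idea concrete, here is a minimal sketch of per-prediction SHAP explanations using the open-source shap library with a scikit-learn model. It illustrates the general technique, not DataRobot’s own implementation; the dataset and model are placeholders.

```python
# A minimal SHAP sketch using the open-source `shap` library.
# Illustrates per-prediction explanations in general, not the
# DataRobot platform API; dataset and model are stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes, for a single prediction, each feature's
# additive contribution relative to a baseline (the average model output).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # shape: (1, n_features)

for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.2f}")
```

Each signed value says how much that feature pushed this particular prediction above or below the baseline, which is exactly the kind of per-decision answer the quote above calls for.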

Bias

Another obstacle to earning trust in AI systems is bias. Organizations are justifiably hesitant to implement systems that might end up producing racist, sexist, or otherwise biased outputs down the line. The fear is that AI systems can’t be trusted because the data they train on are inherently biased. It’s a difficult topic to navigate, both due to the potential complexity of mathematically identifying, analyzing, and mitigating the presence of bias in the data, and due to the social implications of determining what it means to be “fair.” Bias originates in data, but what data to collect and how to collect it are human decisions. To overcome and mitigate bias, designers need to start with the data itself but then look to other techniques like in-processing or post-processing, where modifications can be made to counteract any identified bias. 
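As one concrete illustration, the sketch below measures a simple fairness metric, demographic parity (the gap in positive-prediction rates between groups), and then applies a deliberately crude post-processing adjustment. The data, column names, and per-group thresholds are hypothetical, and per-group thresholds are shown only to demonstrate the mechanics of post-processing; whether any such adjustment is appropriate is a policy and legal question as much as a technical one.

```python
# A minimal sketch of quantifying demographic parity on model outputs.
# All data and thresholds here are hypothetical illustrations.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "score": [0.82, 0.64, 0.35, 0.71, 0.48, 0.30],  # model scores
})

# A single global threshold turns scores into decisions.
df["approved"] = df["score"] >= 0.5

# Selection rate per group; a large gap signals potential disparate impact.
rates = df.groupby("group")["approved"].mean()
print(rates)
print("demographic parity difference:", rates.max() - rates.min())

# One crude post-processing mitigation: per-group thresholds chosen to
# equalize selection rates (shown only to illustrate the idea).
group_thresholds = {"A": 0.6, "B": 0.45}
df["approved_adj"] = df.apply(
    lambda r: r["score"] >= group_thresholds[r["group"]], axis=1
)
print(df.groupby("group")["approved_adj"].mean())
```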

Conclusion

Earning trust in AI systems should be a top priority for technologists. If we don’t overcome this hurdle, more widespread adoption of AI will be hindered, and we may never realize its full potential. Tackling the issues of transparency, explainability, and bias will go a long way toward earning more trust in AI. Building trust in AI requires people, processes, and technology, but it’s a journey that promises to lead to a better, brighter future.

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, he helps customers adopt trust features and navigate sensitive use cases, contributes to product enhancements in the platform, and provides thought leadership on AI ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He holds an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
