Why Trust Matters in AI

September 30, 2021

We can all agree that AI has the potential to help businesses, organizations, and society solve real problems. But there are still many concerns about the consequences of using AI improperly: ethics, privacy, bias, and security are top of mind. For AI projects to be fully embraced, companies must address these concerns, because people need to trust their AI for projects to succeed. In this installment of our blog post series, we explore why trust is an essential component of any conversation around AI.

Performance, Operations, and Ethics

Trust in AI is multidimensional. AI creators, operators, and consumers all have different needs and different factors that they consider when evaluating whether an AI application is trustworthy. For example, with a consumer-facing application, the requirements of trust for the business department that created and owns the AI app are very different from those of the consumer who interacts with it, potentially on their own home devices. To satisfy the needs of different stakeholders, it can be helpful to organize trust in an AI system into three main categories:

  1. Trust in the performance of an AI/machine learning model.
  2. Trust in the operations of an AI system.
  3. Trust in the ethics of the workflow, both in designing the AI system and in how it is used to inform a business process.

In each of these three categories, we identify dimensions of trust that help define them more tangibly. Taken together, these dimensions holistically constitute a system that can earn your trust. Another blog post in this series, How to Build Trust in AI, does a deep dive into all 13 dimensions; in this post, we'll focus on answering the question of why trust in AI matters.

AI Success Hinges on Trust

As noted in a 2020 European Commission whitepaper, “The current and future sustainable economic growth and societal well being increasingly draws on value created by data.” The whitepaper goes on to say that “AI is simply a collection of technologies that combine data, algorithms and computing power.” This means AI has the potential to deliver many benefits to society, such as improved health care, fewer breakdowns of household machinery, safer and cleaner transport systems, and better public services. For business, it can help foster a new generation of products and services in areas like machinery, transport, cybersecurity, agriculture, the green economy, healthcare, and high value-added sectors like fashion and tourism. For public interest, it can help reduce the costs of providing services by improving the sustainability of products and by equipping law enforcement authorities with the best tools to ensure the security of citizens.


Society has a lot to gain by embracing AI. But before it can reap those rewards, people have to trust AI. While underinvestment in AI and a skills gap are holding back full realization of AI’s potential, lack of trust is the main factor holding back wider adoption. Therein lies the answer to why trust matters in AI: it is the cornerstone of AI success.

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, his focus is to help enable customers on trust features and sensitive use cases, contribute to product enhancements in the platform, and provide thought leadership on AI Ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He has an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
