
How Humility Can Help Build More Trust in AI

August 30, 2021 · by Scott Reed · 3 min read

As part of a blog post series on building trust in AI, we recently discussed how DataRobot organizes trust in an AI system into three main categories: performance, operations, and ethics. Each of these categories contains a set of dimensions of trust. The purpose of this blog post is to discuss one dimension of trust in the category of Operations: humility.

Humility as a Dimension of Trust in Operations

An AI system is more than just a model. Using AI requires an infrastructure of software and human management. These pieces of the puzzle make the integration of an AI system into your business process possible. Best practices in the realm of Operations are as pivotal to an AI system’s trustworthiness as the design of the model itself. One key component of Operations is the ability to identify conditions in which a model’s prediction may be uncertain, as not all predictions are made with the same level of certainty. That is exactly what we mean by humility in AI.

Humility Means Recognizing Uncertainty

An AI prediction is fundamentally probabilistic. Contextualizing how confident a prediction is in production enables more informed decision-making. In AI, it is possible to explicitly and deterministically identify situations in which a model’s prediction will have reduced confidence and to leverage that knowledge to make a better or safer decision. Recognizing and admitting that uncertainty is a major step in establishing trust.

What Does It Mean for an AI Prediction to Have Less Confidence?

We are all accustomed to basing certain decisions on probabilities: whether to bring a jacket when there’s a 40% chance of rain, or how to bet on a sports event or a political contest. AI predictions are similar.

In an AI system, there are a couple of ways to understand prediction confidence.

Prediction intervals can be calculated to describe, at a defined confidence level, the range around a prediction in which the actual value is likely to fall. In a classification setting, a prediction is based on a class probability, which provides an alternate view of confidence. In binary classification, for example, the raw class probability is a value between 0 and 1, and the distribution of class probabilities informs the classification threshold above which the positive class is assigned. A prediction that falls right around the classification threshold can be understood to have low confidence. The sketch below illustrates both signals.
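As a minimal sketch, here is how both signals might be computed with scikit-learn. The synthetic data, the 80% interval, and the ±0.05 band around the threshold are illustrative assumptions, not fixed rules:

```python
from sklearn.datasets import make_regression, make_classification
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

# --- Prediction intervals via quantile regression (regression setting) ---
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
lower = GradientBoostingRegressor(loss="quantile", alpha=0.10).fit(X, y)  # 10th percentile
upper = GradientBoostingRegressor(loss="quantile", alpha=0.90).fit(X, y)  # 90th percentile
point = GradientBoostingRegressor().fit(X, y)                             # point estimate

x_new = X[:1]
print(f"prediction: {point.predict(x_new)[0]:.1f}, "
      f"80% interval: [{lower.predict(x_new)[0]:.1f}, {upper.predict(x_new)[0]:.1f}]")

# --- Class probability near the threshold (binary classification setting) ---
Xc, yc = make_classification(n_samples=500, n_features=5, random_state=0)
clf = GradientBoostingClassifier().fit(Xc, yc)

proba = clf.predict_proba(Xc[:1])[0, 1]        # raw probability of the positive class
threshold = 0.5                                # illustrative classification threshold
low_confidence = abs(proba - threshold) < 0.05 # inside the band around the threshold
print(f"class probability: {proba:.2f}, low confidence: {low_confidence}")
```

A wide interval in the first case, or a probability hugging the threshold in the second, are both signals that the prediction deserves less confidence.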

When Will a Prediction Be Less Confident or Certain?

A prediction might also be less certain when the model confronts data measurably dissimilar from the data it was trained on. That may mean an outlier was input into one or more of the features. It might also mean the input contained a value the model has never seen before, such as a new categorical level, or has seen only rarely. A simple pre-prediction check along these lines is sketched below.
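This is a hypothetical check, assuming we retained simple training-time statistics (per-feature mean and standard deviation, plus the set of categories seen in training); the function name, features, and z-score cutoff are invented for illustration:

```python
def flag_uncertain_input(row, train_stats, seen_categories, z_max=3.0):
    """Return reasons a prediction on `row` may deserve less confidence."""
    reasons = []
    for feature, value in row.items():
        if feature in train_stats:  # numeric feature: outlier check
            mean, std = train_stats[feature]
            if std > 0 and abs(value - mean) / std > z_max:
                reasons.append(f"{feature}={value} is an outlier (>{z_max} std devs)")
        elif feature in seen_categories:  # categorical feature: novelty check
            if value not in seen_categories[feature]:
                reasons.append(f"{feature}={value!r} never seen in training")
    return reasons

# Illustrative training-time statistics
train_stats = {"loan_amount": (15000.0, 5000.0)}       # (mean, std dev)
seen_categories = {"state": {"VA", "MD", "DC"}}        # levels seen in training

print(flag_uncertain_input({"loan_amount": 90000.0, "state": "PR"},
                           train_stats, seen_categories))
# ['loan_amount=90000.0 is an outlier (>3.0 std devs)',
#  "state='PR' never seen in training"]
```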

What Kinds of Interventions Are Needed When a Prediction is Uncertain?

Interventions can fall anywhere on a spectrum of disruptiveness. The least disruptive intervention is simply to log and monitor uncertain predictions; over time, this log can provide insight into improvements that can be made to the system. A step further, the user can be warned that the prediction is uncertain. At the most disruptive end, an error can be returned, and/or a human operator alerted to intercede. One way to encode that escalation is sketched below.
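This is a minimal sketch of that escalation in Python; the confidence thresholds and trigger logic are hypothetical, and the point is only that the response grows more disruptive as confidence drops:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("humility")

class UncertainPredictionError(Exception):
    """Raised when a prediction is too uncertain to act on automatically."""

def apply_humility_rules(prediction, confidence, *, warn_below=0.7, block_below=0.5):
    if confidence < block_below:
        # Most disruptive: refuse to return the prediction, escalate to a human
        logger.error("Blocking prediction %.2f (confidence %.2f); alerting operator",
                     prediction, confidence)
        raise UncertainPredictionError("prediction confidence too low")
    if confidence < warn_below:
        # Middle ground: return the prediction, but warn the consumer
        logger.warning("Prediction %.2f has low confidence (%.2f)", prediction, confidence)
    else:
        # Least disruptive: just log for later monitoring and review
        logger.info("Prediction %.2f served with confidence %.2f", prediction, confidence)
    return prediction

apply_humility_rules(0.82, confidence=0.65)  # logged with a warning, still returned
```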

Conclusion

Not all model predictions are made with the same level of confidence; this is the core concept advanced by AI humility. Recognizing and admitting that uncertainty is part of a prediction can go a long way toward establishing trust in AI.

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, his focus is helping customers adopt trust features and navigate sensitive use cases, contributing to product enhancements in the platform, and providing thought leadership on AI ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He holds an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
