
The Feedback Loop: How Humility in AI Impacts Decision Systems

September 11, 2020 · by Sarah Khatry · 3 min read

Humble AI is a new feature in DataRobot that protects the quality of your predictions in situations where the model may be less confident. With Humble AI, users create rules for deployed models that apply to predictions made in real time. These rules identify conditions that quantifiably indicate a prediction may be uncertain, and can then trigger actions like defaulting to a “safe” prediction, overriding outlier values, or not making a prediction at all.
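To make the mechanics concrete, here is a minimal sketch in Python of what a humility rule boils down to: a quantifiable condition paired with an action. The names, thresholds, and model interface below are illustrative assumptions, not DataRobot’s actual API.

```python
# Minimal sketch of a humility rule: a condition plus an action.
# SAFE_DEFAULT, EXPECTED_RANGE, and model.predict() are assumptions
# for illustration, not DataRobot's API.

SAFE_DEFAULT = 0.0           # assumed "safe" fallback prediction
EXPECTED_RANGE = (0.0, 1.0)  # assumed range of plausible predicted values

def humble_predict(model, row):
    """Wrap a real-time prediction with a simple humility rule."""
    prediction = model.predict(row)
    low, high = EXPECTED_RANGE
    # Condition: the model's output falls outside the expected range.
    if not low <= prediction <= high:
        # Action: default to the "safe" value rather than trust the model.
        return SAFE_DEFAULT
    return prediction
```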

While Humble AI is a new concept, it clearly holds tremendous value for decision systems, particularly real-time decision systems, where every decision is urgent and high stakes. But before we look at the value to business decision-making, let’s briefly go over the three most common frameworks for AI and human intelligence working together as parts of a single integrated decision system.

  • Human-in-the-loop: A human reviews or approves each individual decision before it takes effect.
  • Human-over-the-loop: The system operates autonomously, while a human monitors it and intervenes when needed.
  • Human-out-of-the-loop: The system makes decisions fully autonomously, with no human involvement.

When it comes to choosing a level of automation for any given system, you have to consider the resulting trade-off. On one end of the spectrum, you have unbridled automation that approaches maximum efficiency, but it comes at the cost of losing the guardrails provided by humans looped into the process. If your risk appetite is low or you’re working in a highly regulated environment, human-out-of-the-loop systems may pose a serious business risk.

However, in some scenarios, like real-time ad bidding, it’s impossible to have a human in the loop. These systems run high volumes of predictions, with decisions made in a fraction of a second, all based on a head-spinning amount of anonymized user information and the immediate behavioral patterns of each particular user. Many such digital advertising systems can work effectively only without a human in the loop. When the decision window is vanishingly small and the decisions are extraordinarily complex, how do you mitigate risks?

This is where the concepts of human-over-the-loop and humility in AI come into play. Human-over-the-loop systems balance automation and human participation by letting the system run in a fully automated mode while allowing human intervention when needed.

Currently available model monitoring capabilities, like those in DataRobot MLOps, support a long-term oversight framework by continuously collecting scoring data, predictions, and actual outcomes to compare the statistical properties of trained and deployed models. While this can help identify the critical moment when retraining the model is necessary, humility in AI dictates that human intervention should also be an option at the level of an individual, instantaneous prediction. There are a few primary questions in this scenario.
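As a rough illustration of that long-term oversight (a toy sketch, not DataRobot MLOps code), the example below compares a feature’s scoring-time statistics against its training baseline and raises a flag when the distribution appears to have shifted enough that retraining may be warranted. The threshold is an assumption chosen for the example.

```python
import statistics

def needs_retraining_check(training_values, scoring_values, threshold=3.0):
    """Flag drift when the scoring mean moves more than `threshold`
    training standard deviations away from the training mean.
    The default threshold of 3.0 is an illustrative assumption."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    shift = abs(statistics.mean(scoring_values) - mu)
    return shift > threshold * sigma

# Example: scoring data has drifted far from the training distribution.
print(needs_retraining_check([10, 11, 9, 10, 12], [25, 27, 26]))  # True
```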

How do you recognize when exactly humans should be involved? This depends on the confidence level around the model’s prediction, which can be gauged by monitoring the following conditions (illustrated in the sketch after this list):

  • Uncertainty around predictions: Predicted values fall outside the range of expected values.
  • Outlying inputs: Numeric features in the scoring data are dissimilar to what the model ingested during training.
  • Low observation regions: A categorical feature takes a value the user has specified as unexpected or inappropriate.
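Here is one hedged way those three conditions might be checked in code. The prediction range, feature ranges, and flagged categories are invented for illustration; in practice they would come from your training data and domain knowledge.

```python
# Illustrative baselines; in a real deployment these come from training
# data and user configuration, not these hard-coded assumptions.
EXPECTED_PREDICTION_RANGE = (0.0, 1.0)
TRAINING_NUMERIC_RANGES = {"age": (18, 90), "income": (0, 500_000)}
UNEXPECTED_CATEGORIES = {"region": {"unknown", "test"}}

def uncertain_prediction(prediction):
    """Trigger 1: the predicted value is outside the expected range."""
    low, high = EXPECTED_PREDICTION_RANGE
    return not low <= prediction <= high

def outlying_input(row):
    """Trigger 2: a numeric feature is dissimilar to the training data."""
    return any(not low <= row[feature] <= high
               for feature, (low, high) in TRAINING_NUMERIC_RANGES.items())

def low_observation_region(row):
    """Trigger 3: a categorical feature takes a user-flagged value."""
    return any(row[feature] in flagged
               for feature, flagged in UNEXPECTED_CATEGORIES.items())

# Example scoring row that trips the outlying-input trigger.
row = {"age": 130, "income": 50_000, "region": "north"}
print(outlying_input(row))  # True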

But that’s just one piece of the puzzle. The next key step is deciding how the system should respond to a specific lower-confidence trigger. There are a few actions a system could perform in these cases (a sketch follows the list):

  • No operation: Monitor how often the rule’s condition is met without affecting predictions at all. You can use this to check that the condition was chosen correctly and isn’t triggering too often or too rarely.
  • Overriding the prediction: Set the prediction to a specified value, regardless of the model’s output. This lets you enforce business rules on your deployment, since you control the predicted value. You might, for example, specify a “safe” value, ensuring that when the model is unsure under the identified condition, the action taken introduces the least risk.
  • Returning an error: Discard the prediction completely.
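A sketch of how those three responses might be dispatched, again with names and structure that are illustrative assumptions rather than the actual Humble AI interface:

```python
import logging

def apply_humility_action(action, prediction, safe_value=0.0):
    """Respond to a triggered humility rule. `safe_value` is an
    assumed business-approved fallback, not a product default."""
    if action == "no_operation":
        # Log that the condition fired, but return the prediction unchanged.
        logging.info("Humility rule triggered; monitoring only.")
        return prediction
    if action == "override":
        # Force the "safe" value instead of the model's output.
        return safe_value
    if action == "return_error":
        # Discard the prediction completely by refusing to return one.
        raise ValueError("Prediction discarded: humility rule triggered.")
    raise ValueError(f"Unknown action: {action!r}")

# Example: override a suspicious prediction with the safe default.
print(apply_humility_action("override", prediction=7.3, safe_value=0.5))  # 0.5
```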

With DataRobot MLOps, these capabilities are baked into the Humble AI feature. You can also read our latest ebook, Humility in AI, to explore other facets of humility and ways to support it in an AI system, regardless of your platform.

Ebook
Humility in AI: Building Trustworthy and Ethical AI Systems
Download now
About the author
Sarah Khatry

Applied Data Scientist, DataRobot

Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.
