
Humility in AI: Building Trustworthy and Ethical AI Systems

August 17, 2020 · by Sarah Khatry · 2 min read

At DataRobot, we believe humility is a key step in building more trustworthy AI models. What do we mean by humility in AI? We mean that even a model built according to data science best practices may still have particular areas of weakness or vulnerability when confronted with new data to score. For example, that data may have quality issues, or it may produce a prediction in a region where the model has low confidence. These vulnerabilities and uncertainties in a model’s predictions can be defined mathematically, and recognizing them enables the user of an AI system to augment the model’s machine intelligence with human expertise and guidance.
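To make “defined mathematically” concrete, here is a minimal sketch, not DataRobot’s implementation: for a binary classifier, one simple way to express uncertainty is the entropy of the predicted class distribution, which is highest when the predicted probability sits near the 0.5 decision boundary. The function name and thresholds below are illustrative assumptions.

import numpy as np

# Illustrative only: score how unsure a binary classifier is about a single
# prediction, using the entropy of the predicted class distribution.
# Entropy is 0 bits when the model is certain (p = 0 or 1) and 1 bit when it
# is maximally unsure (p = 0.5).
def prediction_uncertainty(probability: float) -> float:
    p = np.clip(probability, 1e-12, 1 - 1e-12)  # avoid log(0)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

print(prediction_uncertainty(0.95))  # ~0.29 bits: confident prediction
print(prediction_uncertainty(0.55))  # ~0.99 bits: near the boundary, low confidence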

No model is going to be perfect. But with a little humility and the injection of human intelligence, a model’s predictions and the decisions that follow from them can stay aligned with the needs of your business process, even in tricky situations. The end goal of enterprise AI is not realized by prediction accuracy alone, but by ensuring your AI system reliably turns data into value.

In June, DataRobot introduced Humble AI, a new feature that protects the quality of your predictions in situations where the model may be less confident. With Humble AI, users create rules for deployed models that apply to predictions made in real time. These rules identify quantifiable conditions indicating that a prediction may be uncertain, and can then trigger actions such as defaulting to a “safe” prediction, overriding outlier values, or not making a prediction at all.
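As a rough illustration of how such rules fit together, here is a hypothetical sketch in plain Python, not the DataRobot product API: each rule pairs a trigger condition with an action applied at scoring time. The rule names, thresholds, and actions below are invented for this example.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HumilityRule:
    name: str
    trigger: Callable[[float], bool]            # does this prediction look risky?
    action: Callable[[float], Optional[float]]  # adjusted prediction, or None to abstain

def score_with_humility(raw_prediction, rules):
    # Apply the first matching rule; otherwise pass the prediction through.
    for rule in rules:
        if rule.trigger(raw_prediction):
            return rule.action(raw_prediction), "rule fired: " + rule.name
    return raw_prediction, "no rule fired"

rules = [
    # Uncertain prediction: probability near the decision boundary,
    # so default to a predefined "safe" prediction (here, class 0).
    HumilityRule("uncertain prediction -> safe default",
                 trigger=lambda p: 0.4 <= p <= 0.6,
                 action=lambda p: 0.0),
    # Extreme value: abstain entirely and route the case to a human reviewer.
    HumilityRule("extreme value -> no prediction",
                 trigger=lambda p: p < 0.01 or p > 0.99,
                 action=lambda p: None),
]

print(score_with_humility(0.55, rules))   # (0.0, 'rule fired: uncertain prediction -> safe default')
print(score_with_humility(0.995, rules))  # (None, 'rule fired: extreme value -> no prediction')
print(score_with_humility(0.85, rules))   # (0.85, 'no rule fired')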

The ebook Humility in AI describes in depth the ways that practicing humility in AI can not only protect your enterprise decision-making in real time, but also provide feedback on your overall process, identifying integrity issues, blind spots, and changing requirements that can inform iterative improvements to your AI decision system.

Ebook
Humility in AI: Building Trustworthy and Ethical AI Systems
Download now

About the author
Sarah Khatry

Applied Data Scientist, DataRobot

Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.
