How to Understand a DataRobot Model

October 31, 2018 · by Colin Priest · 2 min read

We are entering the era of the AI-driven enterprise, but AI will only be accepted into organizations if it is trusted. In 2016, when AI was dominated by black-box technologies, thought leader Thomas Davenport predicted:

“Humans will want to know how … technologies came up with their decision or recommendation. If they can’t get into the black box, they won’t trust it as a colleague.”

At DataRobot, we saw this need too, so we built algorithms with human-friendly explanations that ordinary businesspeople can understand, and we’ve created a cheat sheet that shows you how to quickly understand a DataRobot model.

Trusting an AI is a matter of understanding how well it does its job and whether it approaches that job in a sensible manner. But understanding an AI can be as complex as understanding a human. Just as you would ask questions of a person to learn more about them, DataRobot recommends that you ask questions of your AI to better understand it.

These questions can be summarized as:

  1. How accurate is it? When is it most accurate, and when is it not so accurate? (See the sketch after this list.)
  2. What process or pipeline did it follow?
  3. Which data was important?
  4. What patterns were found in the data?
  5. Why did the AI make a particular decision?
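
To make the first question concrete, here is a minimal sketch of how you might probe a model’s overall accuracy and its accuracy by segment, using generic open-source tooling (scikit-learn) rather than DataRobot’s own interface. The "claims.csv" file, the "fraud" target, and the "region" segment column are all hypothetical stand-ins for your own data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical insurance-claims dataset with a binary "fraud" target and a
# "region" column used only to segment the results, not as a model feature.
df = pd.read_csv("claims.csv")
X = df.drop(columns=["fraud", "region"])
y = df["fraud"]
segment = df["region"]

model = RandomForestClassifier(random_state=0)

# How accurate is it overall? Estimate with 5-fold cross-validation.
print("overall accuracy:", cross_val_score(model, X, y, cv=5).mean())

# When is it most (and least) accurate? Score held-out predictions
# one segment at a time.
X_train, X_test, y_train, y_test, seg_train, seg_test = train_test_split(
    X, y, segment, random_state=0
)
model.fit(X_train, y_train)
preds = pd.Series(model.predict(X_test), index=X_test.index)
for region in seg_test.unique():
    mask = seg_test == region
    print(region, "accuracy:", accuracy_score(y_test[mask], preds[mask]))
```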

The answer to each question involves a different type of explanation. Do you want to understand the model or an individual prediction that it made? These are two fundamentally different questions:

Understanding a model is about seeing whether it is accurate and what patterns it derived from the data.

Understanding an individual prediction is about seeing why a particular data point resulted in a particular decision.
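
To illustrate the difference, here is a minimal sketch using open-source stand-ins rather than DataRobot’s own interface: permutation importance for the model-level view, and SHAP values (from the shap package) for the prediction-level view. The synthetic dataset is purely illustrative.

```python
import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A small synthetic classification problem, purely for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Understanding the MODEL: which features matter overall?
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
print("global feature importances:", imp.importances_mean.round(3))

# Understanding ONE PREDICTION: per-feature contributions that explain
# why this particular row received its score.
explainer = shap.TreeExplainer(model)
print("row 0 contributions:", explainer.shap_values(X_test[:1]))
```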

Different questions call for different model insights and diagnostics, which is why we’ve created this cheat sheet showing which insights and diagnostics answer each question. Over the next few weeks, we will explain how to use these insights to answer each of your questions.

Conclusion

There’s no longer any need to settle for black-box models. Interpretable models are available that will explain, in human-friendly terms, why you can trust them. If your AI can’t answer these questions, then it’s time to upgrade to DataRobot for models that you can trust. Click here to arrange for a demonstration of DataRobot’s model interpretability.

About the Author:

Colin Priest is the Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government, and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro bono work to support cancer research.
