Machine Learning Interpretability Basics

November 23, 2020

This post was originally part of the DataRobot Community. Visit now to browse discussions and ask questions about the DataRobot AI Platform, data science, and more.

Sophisticated machine learning models have a reputation for being accurate but difficult to interpret; that trade-off, however, is not something you simply have to accept. In this learning session, we explore interpretability features that help you understand not just what your model predicts, but how it arrives at its predictions.

These tools are important throughout the whole model lifecycle.

  • If you’re developing a model, you can learn which features matter overall and where your model needs improvement.
  • If you’re a stakeholder for a model, you can see the patterns that the model discovered and compare them against domain knowledge and business rules.
  • If you’re using a model in production to help make decisions, you can learn which features were most important in individual cases, and use that as a guide for actionable next steps or interventions.

Regardless of your role, seeing how the model makes its predictions can help you understand and trust it.
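To make the idea concrete, here is a minimal, model-agnostic sketch of the "which features matter overall" question using permutation importance from scikit-learn. This is an illustration of the general technique only, not DataRobot's implementation; the synthetic dataset and model choice are assumptions for the example.

```python
# Sketch: global feature importance via permutation importance.
# This is a generic illustration, not DataRobot's built-in tooling.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 6 features, of which 3 actually carry signal.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Ranking features by the mean accuracy drop gives the kind of overall importance view described above; per-case explanations (e.g., SHAP values) answer the analogous question for an individual prediction.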

About the author
Linda Haviland

Community Manager