AI Simplified: Model Interpretability

November 7, 2019
by Ashley Smith
1 min read

What good is a machine learning model if you don’t understand what it is, how it works, or how to leverage it? Bill Surrette, Data Scientist at DataRobot, explains the importance of model interpretability in this AI Simplified video.

“Any time we build a model to represent a complex, real-world situation, we as data scientists want to make sure we understand what the model is doing, but we also need to be able to explain the model to our business partners and stakeholders.” — Bill Surrette, Data Scientist

Bill describes the role and history of the linear model, walks us through a specific example of predicting readmissions at a hospital, and teaches us to look at models differently to ensure that the best one is chosen for our specific business needs.

Watch the full video below:

Learn more from the AI Simplified series.

About the author
Ashley Smith
