How to Understand a DataRobot Model
We are entering the era of the AI-driven enterprise, but AI will only be accepted into organizations if it is trusted. In 2016, when AI was dominated by black box technologies, thought leader Thomas Davenport predicted that:
“Humans will want to know how … technologies came up with their decision or recommendation. If they can’t get into the black box, they won’t trust it as a colleague.”
At DataRobot, we also saw this need. So, we built algorithms with human-friendly explanations that can be understood by ordinary business people, and we’ve created a cheat sheet that shows you how to quickly understand a DataRobot model.
Trusting an AI is a matter of understanding how well it does its job and whether it goes about that job in a sensible manner. But understanding an AI can be as complex as understanding a human. Just as you would ask questions of a person to learn more about them, DataRobot recommends that you ask questions of your AI to better understand it.
These questions can be summarized as:
- How accurate is it? When is it most accurate and when is it not so accurate?
- What process or pipeline did it follow?
- Which data was important?
- What patterns were found in the data?
- Why did the AI make a particular decision?
The answer to each question involves a different type of explanation. Do you want to understand the model or an individual prediction that it made? These are two fundamentally different questions:
- Understanding a model is about seeing whether it is accurate, and seeing what patterns the model derived from the data.
- Understanding an individual prediction is about seeing why a particular data point resulted in a particular decision.
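DataRobot surfaces both levels of explanation in its platform. As an illustration only (this is not DataRobot's implementation), the same two levels can be sketched with open-source tools: a model-level view via permutation importance, and a prediction-level view via per-feature contributions of a linear model.

```python
# Sketch of the two explanation levels using scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
import numpy as np

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

# Model-level: which features matter overall?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"{data.feature_names[i]}: importance {imp.importances_mean[i]:.3f}")

# Prediction-level: why did the model score this one row the way it did?
# For a linear model, each feature's contribution is coefficient * scaled value.
coefs = model.named_steps["logisticregression"].coef_[0]
row = model.named_steps["standardscaler"].transform(X_te[:1])[0]
contrib = coefs * row
for i in np.argsort(np.abs(contrib))[::-1][:3]:
    print(f"{data.feature_names[i]}: contribution {contrib[i]:+.3f}")
```

The first loop answers "which data was important?" for the model as a whole; the second answers "why did the AI make this particular decision?" for one row.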
We need to look at different model insights and diagnostics depending on which question we are asking, and that's why we've created this cheat sheet showing which insights and diagnostics answer each question. Over the next few weeks, we will explain how to use these insights to answer each question you have.
There’s no longer any need to settle for black-box models. Interpretable models are available that will explain, in human-friendly terms, why you can trust them. If your AI can’t answer these questions, then it’s time to upgrade to DataRobot for models that you can trust. Click here to arrange for a demonstration of DataRobot’s model interpretability.
About the Author:
Colin Priest is the Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.