Build AI You Can Trust
DataRobot Explainable AI helps you understand the behavior of models and inspires confidence in their results. When AI is not transparent, it is difficult to trust the system and to translate its output into business outcomes. With Explainable AI, you can easily understand the decision-making process of models and bridge the gap between development and actionable results.
Understand AI Behavior
Explainability spans the entire DataRobot platform to support users at each step. Global Explanation techniques allow you to understand the behavior of models and how features affect them. Feature Impact tells you which features have the greatest influence on the model. Feature Effects tells you exactly what effect changing a feature's value will have on the model's predictions.
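A common way to measure this kind of feature importance is permutation-based: shuffle one feature's values and see how much the model's accuracy drops. The sketch below illustrates the general idea behind tools like Feature Impact; all names and the toy model are illustrative, not DataRobot's API.

```python
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def feature_impact(model, rows, labels, seed=0):
    """Drop in accuracy when each feature column is shuffled.

    A larger drop means the model leans on that feature more heavily.
    """
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    impacts = {}
    for col in rows[0]:
        shuffled_vals = [r[col] for r in rows]
        rng.shuffle(shuffled_vals)
        # Rebuild the rows with only this one column permuted.
        shuffled_rows = [dict(r, **{col: v}) for r, v in zip(rows, shuffled_vals)]
        impacts[col] = baseline - accuracy(model, shuffled_rows, labels)
    return impacts

# Toy model: predicts 1 when "income" exceeds a threshold; ignores "zip".
model = lambda row: int(row["income"] > 50)
rows = [{"income": i, "zip": z} for i, z in [(30, 1), (70, 2), (40, 3), (90, 4)]]
labels = [0, 1, 0, 1]

impacts = feature_impact(model, rows, labels)
# Shuffling "zip" never changes a prediction, so its impact is exactly 0.
```

Because the model above never looks at "zip", permuting that column costs it nothing, while permuting "income" can only hurt its accuracy.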
Explain Why a Model Made a Decision
Local Explanations provide row-level explanations for why a model made a prediction. Prediction Explanations tells you which features and values contributed to an individual prediction, and how strongly. These explanations can be returned during model training or at scoring time.
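For intuition, consider the simplest possible case: a linear model, where each feature's contribution to one prediction is just weight × value, and the contributions sum exactly to the score. The sketch below is illustrative only; production tools like Prediction Explanations use model-agnostic methods that work for any model type, and the weights and feature names here are invented.

```python
# Hypothetical linear credit-scoring model (weights are made up).
weights = {"income": 0.04, "debt": -0.08, "tenure": 0.5}
bias = -1.0

def explain(row):
    """Return this row's score and its per-feature contributions, strongest first."""
    contributions = {f: weights[f] * row[f] for f in weights}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, reasons = explain({"income": 80, "debt": 20, "tenure": 3})
# score = -1.0 + 3.2 - 1.6 + 1.5 = 2.1; "income" is the top reason.
```

Ranking contributions by absolute size is what turns a raw prediction into a human-readable explanation: the top entries answer "why did the model score this row the way it did?"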
Dive Deep Into Your Models
DataRobot offers specialized explainability features for unique model types and complex datasets. Activation Maps and Image Embeddings help you understand visual data better. Cluster Insights identifies clusters and shows their feature makeup. Stability shows you how accurate a time series model is over different forecast distances. And that’s just to name a few of our specialized explainability features!
Operationalize with Full Transparency
DataRobot Automated Documentation helps speed up the documentation process for models. Compliance Reports document the important aspects of a model, including methodologies, performance, and more, to help speed up compliance tasks. Deployment Reports document the behavior of a model after it has been deployed and include sections on data drift, service health, and accuracy.
Understand Models in Production
Explainability continues after a model has been deployed. Using DataRobot MLOps, you can monitor a model running in a production environment. Data Drift shows whether the data arriving for scoring differs from the data the model was trained on, and whether the distribution of its predictions has shifted since training. Accuracy enables you to dive into the model's accuracy over time. Service Health shows you information on the performance of the model from an IT perspective.
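One widely used measure of data drift is the Population Stability Index (PSI): bin a feature, compare the bin proportions between training and scoring data, and sum the divergence. The sketch below shows the idea in general terms; it is not DataRobot's specific implementation, and the bin edges and sample data are invented.

```python
import math

def psi(expected, actual, bins):
    """Population Stability Index between two samples of a numeric feature.

    Near 0 means the distributions look alike; larger values mean drift.
    """
    def proportions(sample):
        counts = [0] * (len(bins) + 1)
        for x in sample:
            i = sum(x >= b for b in bins)  # which bin x falls into
            counts[i] += 1
        # Floor each proportion so empty bins don't produce log(0).
        return [max(c / len(sample), 1e-4) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p, q))

training = [10, 12, 14, 15, 18, 20, 22, 25]
scoring_same = [11, 13, 15, 16, 19, 21, 23, 24]
scoring_shifted = [30, 32, 35, 38, 40, 42, 45, 48]
bins = [15, 25, 35]

# Data resembling the training distribution scores low;
# clearly shifted data scores far higher, flagging drift.
```

Monitoring a statistic like this per feature on every batch of scoring data is what lets a deployment raise an alert before accuracy visibly degrades.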