Explaining Deployments Dashboard in DataRobot

October 16, 2020

This post was originally part of the DataRobot Community. Visit now to browse discussions and ask questions about DataRobot, AI Cloud, data science, and more.

The Deployments tab provides you with a dashboard list of all the deployments you have; this includes those that you deployed, as well as deployments shared with you by others. By deployment we mean a model that DataRobot is tracking to allow you to effectively monitor and manage the model performance.

Figure 1. Dashboard

A deployment in the DataRobot MLOps environment represents one of three kinds of underlying models: a DataRobot model, a Custom Inference model, or an External model:

Figure 2. Three types of models for deployments

The first is a model built and deployed from within DataRobot AutoML or AutoTS. Specifically, these are models built after you upload your data and hit the Start button. You request predictions from these models through the DataRobot API.

Figure 3. DataRobot model
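Requesting predictions from a deployed model is a plain HTTP call to the deployment's prediction endpoint. The sketch below builds (without sending) such a request; the server URL, deployment ID, and token are placeholders, and the exact endpoint path and required headers can vary by installation, so check your cluster's API documentation:

```python
import json
import urllib.request

# All values below are placeholders -- substitute your own prediction
# server, deployment ID, and API token.
PRED_SERVER = "https://example.datarobot.com"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
API_TOKEN = "YOUR_API_TOKEN"

# Rows to score, sent as a JSON array of records.
rows = [{"feature_a": 1.0, "feature_b": "red"}]

# Build (but do not send) the request so its shape is visible.
req = urllib.request.Request(
    url=f"{PRED_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions",
    data=json.dumps(rows).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)

# To actually score, you would call urllib.request.urlopen(req) and
# parse the JSON response body.
print(req.full_url)
```

Because every prediction flows through this endpoint, DataRobot can log each request for the monitoring described below.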

The second is a custom inference model: a model that you build outside of DataRobot and then upload into it. As with DataRobot models, you request predictions from these custom models through the DataRobot API.

Figure 4. Custom model Add New

The third is an external model hosted in your own environment that communicates with DataRobot remotely. In this case, you install the DataRobot MLOps agent, software that acts as a bridge between your application and DataRobot. You request predictions from your model as you normally would, then pass the prediction output to the agent, which reports the prediction data back to DataRobot so that the deployment can capture that information.

Figure 5. External model
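Conceptually, the agent flow looks like the sketch below. This is not the real `datarobot-mlops` library API; it is only an illustration, under the assumption of a filesystem spool directory that the agent watches, of how an application might score locally and then queue a prediction record for the agent to forward:

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical spool directory; in a real setup the agent is configured
# to watch a reporting channel and forward records to DataRobot.
SPOOL_DIR = Path("mlops_spool")
SPOOL_DIR.mkdir(exist_ok=True)

def my_model_predict(features):
    """Stand-in for your own model's scoring function."""
    return 0.73  # pretend probability

def report_to_agent(deployment_id, features, prediction):
    """Queue one prediction record for the agent to pick up."""
    record = {
        "deploymentId": deployment_id,
        "timestamp": time.time(),
        "features": features,
        "prediction": prediction,
    }
    path = SPOOL_DIR / f"{uuid.uuid4().hex}.json"
    path.write_text(json.dumps(record))
    return path

# Your application scores as usual, then hands the result to the agent.
features = {"feature_a": 1.0}
pred = my_model_predict(features)
spooled = report_to_agent("YOUR_DEPLOYMENT_ID", features, pred)
```

The key design point is that scoring stays entirely in your environment; only the reporting side channel talks to DataRobot.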

In all three cases, your deployment captures the predictions that the underlying model makes, along with the actual outcomes of those predictions once they are collected and uploaded. And in all three cases, the Deployment user interface provides you with a view into 1) how the nature of your input data changes, 2) how the distribution of the model's predictions changes, and 3) how accuracy changes over time.
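To build intuition for what "drift in input data" means, here is one common drift score, the Population Stability Index (PSI), computed for a single categorical feature. DataRobot computes its own drift metrics internally; this standalone sketch is only for illustration:

```python
import math
from collections import Counter

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two categorical samples.

    0 means identical distributions; values above roughly 0.2 are
    often treated as significant shift.
    """
    categories = set(expected) | set(actual)
    e_counts, a_counts = Counter(expected), Counter(actual)
    score = 0.0
    for cat in categories:
        e = e_counts[cat] / len(expected) + eps  # training-time share
        a = a_counts[cat] / len(actual) + eps    # scoring-time share
        score += (a - e) * math.log(a / e)
    return score

train = ["red"] * 50 + ["blue"] * 50    # feature values at training time
scoring = ["red"] * 80 + ["blue"] * 20  # same feature at prediction time

drift = psi(train, scoring)  # well above 0.2: significant shift
```

Accuracy monitoring works the same way in spirit: once actual outcomes are uploaded, they are compared against the captured predictions over time.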

On the main Deployments page, across the top of the inventory, a summary of the usage and status of all active deployments is displayed, with color-coded health indicators.

Figure 6. Summary of status for active deployments

Beneath the summary is an individual report for each deployment. There are two unique deployment lenses that modify the information displayed in the dashboard:

  • The Prediction Health lens summarizes prediction usage and model status for all active deployments.
  • The Governance lens reports the operational and social aspects of all active deployments.

To change deployment lenses, click the active lens in the top right corner and select a lens from the dropdown, or click the left or right arrow.

Starting with the Prediction Health lens: next to the name of each deployment, you see color-coded icons that represent the level of health. For the Service Health column, this refers to the number of errors; for the Data Drift and Accuracy columns, it refers to the degree of degradation or shift in incoming data.

Figure 7. Deployment health indicators

The lens also displays a few metrics on prediction activity and traffic.

Figure 8. Governance lens

Let’s now switch to the Governance lens. Importance indicates the model’s traffic volume, financial impact, and other measures of value. The build environment indicates the environment in which the model was built. Then there is information about the owner and age of the model, and a Humility monitor, which reports when the model makes uncertain predictions. Lastly, there is a menu of options for managing the model.

To view all of this information in detail, simply click on the deployment you want to view.

More Information

Search the DataRobot Public Platform Documentation for Deployment inventory and Model Registry.

About the author

Linda Haviland

Community Manager