This post was originally part of the DataRobot Community. Visit now to browse discussions and ask questions about DataRobot, AI Platform, data science, and more.
The Deployments tab provides a dashboard inventory of all your deployments; this includes both those you created and those shared with you by others. By deployment we mean a model that DataRobot tracks so you can effectively monitor and manage its performance.
Figure 1. Dashboard
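The same inventory can also be read programmatically. The sketch below is illustrative only, using the DataRobot Python client; the endpoint URL, API token, and printed attributes are placeholders or may vary by client version.

```python
# Illustrative sketch: list your deployments with the DataRobot Python client.
# The endpoint and token are placeholders; attribute names may vary by client version.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

for deployment in dr.Deployment.list():
    # The list covers deployments you created as well as those shared with you.
    print(deployment.id, deployment.label)
```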
A deployment in the DataRobot MLOps environment represents one of three kinds of underlying models: a DataRobot model, a custom inference model, or an external model.
Figure 2. Three types of models for deployments
The first is a model built and deployed from within DataRobot AutoML or AutoTS. Specifically, these are models built after you upload your data and hit the Start button. You request predictions from these models through the DataRobot API.
Figure 3. DataRobot model
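As a rough illustration of what such a prediction request can look like, the sketch below posts a CSV of scoring data to a deployment's real-time prediction endpoint. The host name, deployment ID, API token, and DataRobot key are all placeholders, and the exact route can differ by installation.

```python
# Illustrative sketch: request real-time predictions from a DataRobot deployment.
# Host, deployment ID, token, and key are placeholders; confirm the route for your installation.
import requests

PREDICTION_HOST = "https://example.prediction.host"  # placeholder
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"                 # placeholder

url = f"{PREDICTION_HOST}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
headers = {
    "Content-Type": "text/csv; charset=UTF-8",
    "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder
    "DataRobot-Key": "YOUR_DATAROBOT_KEY",     # placeholder; used on managed cloud
}

with open("scoring_data.csv", "rb") as f:
    response = requests.post(url, data=f, headers=headers)

response.raise_for_status()
print(response.json())  # predictions for each row of scoring_data.csv
```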
The second is a custom inference model that you build outside DataRobot and then upload into the platform. As with DataRobot models, you request predictions from these custom models through the DataRobot API.
Figure 4. Custom model Add New
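Because the deployment fronts the custom model, the same prediction routes apply. As one hedged example, the DataRobot Python client's batch prediction API can score a local file against the deployment; the deployment ID and file paths below are placeholders.

```python
# Illustrative sketch: batch-score a local CSV against a deployed custom inference model.
# The deployment ID and file paths are placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

job = dr.BatchPredictionJob.score(
    deployment="YOUR_DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "scoring_data.csv"},
    output_settings={"type": "localFile", "path": "predictions.csv"},
)
job.wait_for_completion()  # predictions land in predictions.csv
```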
The third is an external model hosted in your own environment that communicates remotely with DataRobot servers. In this case, you install the DataRobot MLOps agent, which acts as a bridge between your application and DataRobot. You request predictions from your model as you normally would, then pass the prediction output to the agent, which reports the prediction data back to DataRobot so that the deployment can capture it.
Figure 5. External model
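To make that flow concrete, here is a minimal sketch of reporting predictions from your own environment. It assumes the datarobot-mlops reporting library and a filesystem spool channel that a locally installed monitoring agent forwards to DataRobot; the method names follow that library's documented pattern but may differ by version, and the IDs, paths, and stand-in model are placeholders.

```python
# Illustrative sketch: report predictions made by a remote model back to DataRobot.
# Assumes the datarobot-mlops reporting library plus a monitoring agent configured to
# read the same filesystem spool directory; IDs, paths, and the model are placeholders.
import time
import pandas as pd
from datarobot.mlops.mlops import MLOps


def my_model_predict(df):
    # Stand-in for your own model's predict call (hypothetical).
    return [0.5] * len(df)


mlops = (
    MLOps()
    .set_deployment_id("YOUR_DEPLOYMENT_ID")     # placeholder
    .set_model_id("YOUR_MODEL_ID")               # placeholder
    .set_filesystem_spooler("/tmp/mlops_spool")  # agent watches this directory
    .init()
)

features = pd.read_csv("scoring_data.csv")
start = time.time()
predictions = my_model_predict(features)
elapsed_ms = (time.time() - start) * 1000

# Forward prediction statistics and data; the agent relays them to the deployment.
mlops.report_deployment_stats(len(predictions), elapsed_ms)
mlops.report_predictions_data(features_df=features, predictions=predictions)
mlops.shutdown()
```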
In all three cases, your deployment captures the predictions that the underlying model makes, along with the actual outcomes of those predictions once they are collected and uploaded. And in all three cases, the Deployment user interface gives you a view into 1) how the nature of your input data changes, 2) how the distribution of the model's predictions changes, and 3) how accuracy changes over time.
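Actual outcomes can be uploaded to the deployment once you have them. One hedged way is the Python client's submit_actuals call, keyed by the association ID you sent with each prediction; the IDs and values below are placeholders.

```python
# Illustrative sketch: upload actual outcomes so the deployment can track accuracy.
# Association IDs must match those sent with the original prediction rows; values are placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
deployment = dr.Deployment.get("YOUR_DEPLOYMENT_ID")

deployment.submit_actuals([
    {"association_id": "order-1001", "actual_value": 1},
    {"association_id": "order-1002", "actual_value": 0},
])
```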
On the main Deployments page, across the top of the inventory, a summary of the usage and status of all active deployments is displayed, with color-coded health indicators.
Figure 6. Summary of status for active deployments
Beneath the summary is an individual report for each deployment. There are two deployment lenses that change the information displayed in the dashboard:
The Prediction Health lens summarizes prediction usage and model status for all active deployments.
The Governance lens reports the operational and social aspects of all active deployments.
To change deployment lenses, click the active lens in the top right corner and select a lens from the dropdown or click the left or right arrow.
Starting with the Prediction Health lens, next to the name of each deployment you see color-coded icons that represent its level of health: the number of errors for the Service Health column, the degree of shift in incoming data for the Data Drift column, and the degree of degradation for the Accuracy column.
Figure 7. Deployment health indicators
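The same three signals can also be pulled programmatically. The sketch below uses the Python client's deployment monitoring calls, with the caveat that the exact method and metric names may vary by client version; the deployment ID is a placeholder.

```python
# Illustrative sketch: inspect a deployment's service health, data drift, and accuracy.
# Method and metric names may vary by client version; the deployment ID is a placeholder.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
deployment = dr.Deployment.get("YOUR_DEPLOYMENT_ID")

service_stats = deployment.get_service_stats()  # request counts, error rates, latency
target_drift = deployment.get_target_drift()    # shift in the prediction distribution
accuracy = deployment.get_accuracy()            # accuracy metrics, once actuals arrive

print(service_stats.metrics)
print(target_drift.drift_score)
print(accuracy.metrics)
```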
A few metrics on prediction traffic are also displayed.
Figure 8. Governance lens
Let’s now switch to the Governance lens. Importance indicates the model’s traffic volume, financial impact, and other measures of value. The build environment indicates the environment in which the model was built. Then there’s information about the owner and age of the model, and a Humility monitor, which reports when the model makes uncertain predictions. Lastly, there’s a menu of options for managing the model.
To see any of this information in detail, click the deployment you want to view.