Explaining Challenger Models in DataRobot

March 19, 2021

This post was originally part of the DataRobot Community. Visit now to browse discussions and ask questions about DataRobot, AI Cloud, data science, and more.

Almost certainly, your deployed model will degrade over time. Inevitably, as the data used to train your model looks increasingly different from the incoming prediction data, prediction quality declines and becomes less reliable. Challenger models provide a framework to compare alternative models to the current production model. You can submit challenger models that shadow a deployed model, and then replay predictions already made to analyze the performance of each. This allows you to compare the predictions made by the challenger models against those of the currently deployed model (also called the “champion” model) to determine whether there is a superior DataRobot model that would be a better fit. By leveraging the DataRobot MLOps Agent, this capability is available for any production model—no matter where it is running and regardless of the framework or language in which it was built.
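To make “training data looking increasingly different from incoming prediction data” concrete, here is a generic drift check in the style of a population stability index. This is an illustrative sketch only, not DataRobot’s internal drift computation, and the data in it is synthetic:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Bin the training (expected) distribution of a numeric feature and
    measure how much the incoming (actual) distribution shifts across
    those same bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # data the model was trained on
incoming = rng.normal(0.5, 1.0, 10_000)  # prediction data that has drifted
psi = population_stability_index(training, incoming)
# A PSI above roughly 0.2 is a common rule-of-thumb threshold for
# significant drift -- a signal that a challenger is worth evaluating.
```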

Figure 1. Deployment Challengers

To support challenger models for a deployment, you enable the Challengers option with prediction row storage. To do so, adjust the deployment’s data drift settings either when creating a deployment (from the Data Drift tab) or from the Settings > Data tab for an existing deployment. Prediction row storage instructs DataRobot to store prediction request data at the row level for the deployment. DataRobot will use these predictions to compare the champion and challenger models.

Figure 2. Enable challenger models

Before adding a challenger model to a deployment, you first build and select the model to be added as a challenger. You can choose a model from the Leaderboard, or you can use your own custom model deployed within MLOps. In either case, the challenger models are referenced as model packages from the Model Registry and must have the same target type as the champion.

Feature lists for the current champion and the challenger models do not need to match exactly. However, to replay predictions, the data passed at prediction time should be a superset of all features required for both the champion and challenger models. For example, if the training dataset for the champion was target, feature 1, feature 2, feature 3, and the training dataset for the challenger was target, feature 1, feature 2, feature 4, then the prediction request for replaying predictions should include: target, feature 1, feature 2, feature 3, feature 4.
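One way to derive the columns a replay request needs is simply the union of both feature lists. A plain-Python sketch (the feature names here are hypothetical):

```python
# Hypothetical feature lists for a champion and a challenger model.
champion_features = ["target", "feature 1", "feature 2", "feature 3"]
challenger_features = ["target", "feature 1", "feature 2", "feature 4"]

# The replay request must carry the union of both lists so that
# either model can score every stored prediction row.
replay_columns = sorted(set(champion_features) | set(challenger_features))
```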

When you have selected a model to serve as a challenger, from the Leaderboard navigate to Predict > Deploy and select Add to Model Registry. This creates a model package for the selected model in the Model Registry, which enables you to add the model to a deployment as a challenger.

Figure 3. Deploy model—add to model registry

Now navigate to the deployment for the champion model, select Challengers, and click Add challenger model. Choose the model you want from the Model Registry and click Select model package.

Figure 4. Add challenger model

Figure 5. Select challenger model from model registry

As the final step, you assign a Prediction Environment for the challenger model, which specifies the resources to use for replay predictions. This allows DataRobot MLOps to avoid impacting production performance by replaying predictions in an environment other than the one hosting the production model.

Figure 6. Prediction Environment for the challenger model

The Deployment Challengers tab shows the following information for the champion and challenger models:

  • Model name
  • Metadata for each model, such as the project name and the execution environment type
  • Training data
  • Actions menu to replace or delete the model

Figure 7. Selected champion and challenger models

After adding challenger models, you can replay stored predictions made with the champion model. This allows you to compare performance metrics such as predicted values, accuracy, and data errors across each model. To replay predictions, select Update challenger predictions.
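Conceptually, replaying stored predictions means re-scoring each captured request with the challenger and pairing the result with the champion’s original prediction. The following is a generic sketch with hypothetical rows and a trivial stand-in model, not the DataRobot API:

```python
from dataclasses import dataclass

@dataclass
class StoredPredictionRow:
    """A prediction request captured by prediction row storage."""
    features: dict
    champion_prediction: float

def replay(rows, challenger_model):
    """Score stored rows with a challenger and pair the results
    with the champion's original predictions."""
    return [
        (row.champion_prediction, challenger_model(row.features))
        for row in rows
    ]

# Hypothetical stored rows and a trivial stand-in challenger.
rows = [
    StoredPredictionRow({"feature 1": 0.2}, champion_prediction=0.30),
    StoredPredictionRow({"feature 1": 0.8}, champion_prediction=0.75),
]

def challenger(features):
    return min(1.0, features["feature 1"] + 0.1)

pairs = replay(rows, challenger)  # [(champion, challenger), ...]
```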

Figure 8. Replay predictions

The prediction requests made within the time range specified by the date slider will be replayed for the challengers. After predictions are made, click Refresh on the time range selector to view an updated display of performance metrics for the models.

Figure 9. Refreshed display of performance metrics

You can also replay predictions on a periodic schedule instead of doing so manually. Navigate to a deployment’s Settings > Challengers tab. Turn on the toggle to Automatically replay challengers, and set when you want predictions to be replayed (such as every hour, or every Sunday at 18:00).

Figure 10. Set schedule for replaying predictions

Once you have replayed the predictions, you can analyze and compare the results. The Predictions chart (under the Challengers tab) records the average predicted value of the target for each model over time. Hover over a point to compare the average value for each model at the specific point in time. For binary classification projects, use the Class dropdown to select the class for which you want to analyze the average predicted values. The chart also includes a toggle that allows you to switch between continuous and binary modes. 
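The per-model averages behind a chart like this can be computed by grouping predictions into time buckets. A standalone sketch over hypothetical timestamped predictions (DataRobot computes this for you):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical replayed predictions: (model, timestamp, predicted value).
records = [
    ("champion",   datetime(2021, 3, 1, 9),  0.62),
    ("champion",   datetime(2021, 3, 1, 14), 0.58),
    ("challenger", datetime(2021, 3, 1, 9),  0.55),
    ("challenger", datetime(2021, 3, 1, 14), 0.51),
]

def average_by_day(records):
    """Average predicted value per model per calendar day."""
    sums = defaultdict(lambda: [0.0, 0])
    for model, ts, value in records:
        key = (model, ts.date())
        sums[key][0] += value
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

averages = average_by_day(records)
```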

Figure 11. Predictions chart for champion and challenger

The Accuracy chart records the change in a selected accuracy metric value (LogLoss, in this example) over time. These metrics are identical to those used for the evaluation of the model before deployment. Use the dropdown to change accuracy metrics.
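For reference, LogLoss for a binary model is the mean negative log-likelihood of the actual outcomes under the predicted probabilities. A self-contained sketch with made-up probabilities (DataRobot computes this metric for you):

```python
import math

def log_loss(actuals, probabilities, eps=1e-15):
    """Mean negative log-likelihood for binary outcomes."""
    total = 0.0
    for y, p in zip(actuals, probabilities):
        p = min(max(p, eps), 1 - eps)  # guard against log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(actuals)

actuals = [1, 0, 1, 1]
champion_probs = [0.9, 0.2, 0.7, 0.6]
challenger_probs = [0.95, 0.1, 0.8, 0.7]
# Lower LogLoss indicates better-calibrated predictions, so in this
# made-up example the challenger would outperform the champion.
champion_ll = log_loss(actuals, champion_probs)
challenger_ll = log_loss(actuals, challenger_probs)
```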

Figure 12. Accuracy chart for champion and challenger

The Data Errors chart records the data error rate for each model over time. Data error rate measures the percentage of requests that result in an HTTP error (i.e., problems with the prediction request submission).
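The data error rate itself is just the share of requests that returned an HTTP error status. A minimal sketch with hypothetical status codes:

```python
# Hypothetical HTTP status codes returned for a batch of
# prediction requests against one model.
statuses = [200, 200, 422, 200, 500, 200, 200, 200, 200, 200]

# Data error rate: percentage of requests that came back as HTTP
# errors (status code 400 or above).
errors = sum(1 for code in statuses if code >= 400)
error_rate = 100.0 * errors / len(statuses)
```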

For more information on the MLOps suite of tools, visit the DataRobot Community for a variety of additional videos, articles, webinars and more.

More Information

See the DataRobot Public Platform Documentation for the Challengers tab.

About the author
DataRobot

The Next Generation of AI

DataRobot AI Cloud is the next generation of AI. The unified platform is built for all data types, all users, and all environments to deliver critical business insights for every organization. DataRobot is trusted by global customers across industries and verticals, including a third of the Fortune 50. For more information, visit https://www.datarobot.com/.
