MLOps Helps Mitigate the Unforeseen in AI Projects
The latest McKinsey Global Survey on AI shows that AI adoption continues to grow and that the benefits remain significant. But in the COVID-19 pandemic’s first year, many organizations saw stronger results on the cost-savings front than on the top line. At the same time, AI remains complex and out of reach for many. For example, a recent IDC study1 shows that it takes about 290 days on average to deploy a model into production from start to finish. As a result, outcomes that drive real business change can be elusive.
Today’s economy is under pressure from inflation, rising interest rates, and disruptions in the global supply chain. As a result, many organizations are seeking new ways to overcome these challenges: to stay agile and respond rapidly to constant change. We do not know what the future holds. But we can take the right actions to prevent failure and ensure that AI systems perform to predictably high standards, meet our business needs, and unlock additional resources for financial sustainability.
Operational Efficiency with AI Inside
Once you move your models into production, you need to monitor and manage them to ensure that you can trust their predictions and turn them into the right business decisions. You need full visibility and automation to rapidly correct your business course and respond to daily changes.
Imagine yourself as a pilot flying an aircraft through a thunderstorm: you have dashboards and automated systems that inform you about any risks, and you use that information to navigate and land safely. The same is true for your ML workflows – you need the ability to navigate change and make strong business decisions.
Building AI Trust During Uncertain Market Conditions
Your model was accurate yesterday, but what about today? Conditions can change overnight.
How long will it take to replace the model? How can I get a better model fast? How can I prove the value of AI to my business stakeholders? These and many other questions now top the agenda of every data science team.
Our team worked tirelessly on the MLOps component of the DataRobot AI Cloud platform to provide the experience that allows you to address these and many other challenges associated with model monitoring and trustworthy AI. Here are several enhancements that our team announced recently that I am personally excited about.
Challenger Insights for Multiclass and External Models
One of the MLOps features that consistently impresses customers is Continuous AI and the Challenger/Champion framework. After DataRobot AutoML has delivered an optimal model, Continuous AI helps ensure that the currently deployed model will always be the best one even as the world changes around it.
DataRobot Data Drift and Accuracy Monitoring detects when reality diverges from the conditions under which the training dataset was created and the model trained. Meanwhile, DataRobot can continuously train Challenger models on more up-to-date data. Once a Challenger is found to outperform the current Champion, the DataRobot platform notifies you so you can switch to this new candidate model.
Business processes probably require you to verify this suggestion: is this automatically created model actually, and reliably, better than the current Champion? To facilitate this decision, the DataRobot platform provides Challenger Insights, a deep but intuitive analysis of how well the Challenger performs and how it stacks up against the Champion. Challenger Insights also shows how the models compare on standard performance metrics and informative visualizations like Dual Lift.
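The core of the Champion/Challenger decision can be sketched in plain Python. The metric, the holdout data, and the promotion rule below are illustrative assumptions for the sketch, not DataRobot’s implementation: a Challenger is promoted only when it beats the Champion on a holdout metric such as Log Loss.

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Mean binary cross-entropy; lower is better."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Holdout labels and each model's predicted probabilities (illustrative data)
labels           = [1, 0, 1, 1, 0]
champion_probs   = [0.6, 0.5, 0.4, 0.7, 0.3]
challenger_probs = [0.9, 0.1, 0.8, 0.9, 0.2]

champion_loss   = log_loss(labels, champion_probs)
challenger_loss = log_loss(labels, challenger_probs)

# Promote the Challenger only if it wins on the holdout metric
promote = challenger_loss < champion_loss
```

In practice a single metric is rarely enough, which is why Challenger Insights shows several metrics plus visualizations like Dual Lift before you commit to a switch.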
Manage changing market conditions. With DataRobot AI Cloud, you can see predicted values and accuracy across various metrics for the Champion as well as any Challenger models.
Another addition to DataRobot Continuous AI is Challenger Insights for External Models. This means you can leverage DataRobot MLOps to monitor models that are already deployed elsewhere, while DataRobot constructs Challengers in the background. If a DataRobot AutoML Challenger manages to beat the external model, Challenger Insights lets you carefully compare your own model against the candidate produced by DataRobot AutoML.
Clearly know when your Challenger beats your Champion. DataRobot Challenger Insights includes a rich set of performance metrics, from standards such as Log Loss and RMSE to the more specialized metrics DataRobot uses for specific problems. Here the DataRobot view shows that the Challenger beats the Champion on some metrics, but not all.
DataRobot offers more in-depth analysis in Challenger Insights, including Dual Lift, ROC and Prediction Differences. In this case, DataRobot shows that the Challenger automatically retrained via AutoML handily beats the Champion on key metrics.
Model Observability with Custom Metrics
To quantify how well your models are doing, DataRobot provides you with a comprehensive set of data science metrics, from the standards (Log Loss, RMSE) to the more specialized (SMAPE, Tweedie Deviance). But many of the things you need to measure are hyperspecific to your unique problems and opportunities: particular business KPIs or proprietary data science metrics. With DataRobot Custom Metrics, you can monitor the details specific to your business.
As a first stage, DataRobot provides access to training and prediction data via API and UI. This allows you to locally compute business KPIs, such as expected profit, or novel metrics fresh from ML conferences, to stay up to date on how your models, both DataRobot and external, are performing. The DataRobot platform will iterate on this and over time make it extremely convenient and fast to monitor the metrics vital to your business.
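A custom metric like expected profit is just a function over exported prediction data. The sketch below is a minimal illustration of the idea; the `tp_value` and `fp_cost` figures and the threshold are hypothetical business assumptions, not DataRobot parameters, and the data stands in for what you might export via the API or UI.

```python
def expected_profit(probs, actuals, threshold=0.5,
                    tp_value=100.0, fp_cost=20.0):
    """Business KPI: net profit from acting on predicted positives.

    tp_value / fp_cost are illustrative business figures for this
    sketch, not DataRobot parameters.
    """
    profit = 0.0
    for p, y in zip(probs, actuals):
        if p >= threshold:  # we act on this prediction
            profit += tp_value if y == 1 else -fp_cost
    return profit

# Prediction scores and outcomes, as might be exported from a deployment
probs   = [0.9, 0.8, 0.3, 0.7]
actuals = [1,   0,   0,   1]

kpi = expected_profit(probs, actuals)  # 2 true positives, 1 false alarm
```

Tracking a number like this alongside Log Loss or RMSE tells stakeholders what the model is worth in their own units, not just how well it ranks predictions.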
Embrace Large Scale with Confidence
As organizations see more value from AI, they want to apply it to more use cases. Consider also the volume of predictions. If, for example, you have a model that predicts warehouse capacity for one store, what about capacity globally? What if you add more segments and conditions? Can your system handle billions of predictions while ensuring that your models are trustworthy and your data is secure?
Act locally, but think globally. Maybe you are at the beginning of your journey and have only a few models in production, but time flies, and you have to stay one step ahead. DataRobot supports companies at different stages of AI maturity, and we have learned from our customers what it takes to build AI systems that scale.
Autoscaling Deployments with MLOps
DataRobot includes a new workflow that enables you to deploy a custom model (or algorithm) to the Algorithmia inference environment, while automatically generating a DataRobot deployment that is connected to the Algorithmia Inference Model (algorithm).
When you call the Algorithmia API endpoint to make a prediction, you’re automatically feeding metrics back to your DataRobot MLOps deployment — allowing you to check the status of your endpoint and monitor for model drift and other failure modes.
Large-Scale Monitoring for Java
Are you making millions of predictions daily or hourly? Do you need to ensure that you have a top-performing model in production without sharing sensitive data? Now you can aggregate prediction statistics much faster while controlling the governance and security of your sensitive data. There is no need to submit entire prediction requests to the DataRobot AI Cloud platform to get drift and accuracy monitoring.
The new DataRobot Large-Scale Monitoring capability gives you access to aggregated prediction statistics. It performs some monitoring calculations outside of DataRobot and sends only the summary metadata to MLOps, letting you independently control the scale. This strategy can handle billions of rows per day.
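The idea behind this kind of aggregation can be illustrated in a few lines of Python. This is a conceptual sketch, not DataRobot’s actual agent code: raw prediction-request values are reduced locally to summary statistics (counts, a mean, a histogram), and only that small metadata payload would ever leave your environment.

```python
def summarize_feature(values, bin_edges):
    """Reduce raw prediction-request values to summary statistics so
    only metadata, never row-level data, leaves your environment.

    Illustrative sketch of the aggregation idea, not DataRobot code.
    """
    counts = [0] * (len(bin_edges) - 1)
    for v in values:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if bin_edges[i] <= v < bin_edges[i + 1] or (last and v == bin_edges[-1]):
                counts[i] += 1
                break
    return {
        "count": len(values),
        "mean": sum(values) / len(values) if values else 0.0,
        "histogram": counts,  # distribution shape, no raw rows
    }

# Millions of raw rows reduce to a few numbers before upload
summary = summarize_feature([1, 2, 3, 4, 5], bin_edges=[0.0, 2.5, 5.0])
```

Because histograms from separate batches can simply be added together, this kind of summary scales to billions of rows while keeping the sensitive raw values on your side of the boundary.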
Learn More About DataRobot MLOps
DataRobot is building the best development experience and productionization platform to meet both your organization’s needs and real-world conditions.
Every enhancement is an additional step to maximize efficiency and scale your AI operations. Learn more about DataRobot MLOps and access public documentation to get more technical details about recently released features.
1IDC, MLOps – Where ML Meets DevOps, doc #US48544922, March 2022
2IDC, FutureScape: Worldwide Artificial Intelligence and Automation 2022 Predictions, doc #US48298421, October 2021