MLOps makes model deployment easy. Operations teams, not data scientists, can deploy models written in a variety of modern programming languages, such as Python and R, onto modern runtime environments in the cloud or on-premises. Users of the MLOps system don't need to know any of these technologies to drag and drop a model into the system, create a container, and deploy the model to a production environment.
MLOps gives you monitoring that is designed for machine learning. Monitoring includes service health, data drift, model accuracy, and proactive alerts that are sent to stakeholders through a variety of channels, such as email, Slack, and PagerDuty. With MLOps monitoring in place, your teams can deploy and manage thousands of models, and your business will be ready to scale production AI.
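To make the data-drift idea concrete, here is a minimal sketch of the kind of check a drift monitor runs: comparing a feature's distribution at training time against what the model sees in production, using the Population Stability Index (PSI). The 0.2 threshold and the plain-Python implementation are illustrative assumptions, not DataRobot's actual internals.

```python
import math
from collections import Counter

def psi(training, production):
    """Population Stability Index between two categorical distributions."""
    categories = set(training) | set(production)
    t_counts, p_counts = Counter(training), Counter(production)
    score = 0.0
    for c in categories:
        # A small floor avoids log/division blow-ups for unseen categories.
        t = max(t_counts[c] / len(training), 1e-6)
        p = max(p_counts[c] / len(production), 1e-6)
        score += (p - t) * math.log(p / t)
    return score

def check_drift(training, production, threshold=0.2):
    """Flag drift when PSI exceeds a commonly used (but tunable) threshold."""
    score = psi(training, production)
    return score > threshold, score

# The production mix has shifted sharply from 80/20 to 30/70, so this drifts.
drifted, score = check_drift(
    ["a"] * 80 + ["b"] * 20,   # feature distribution at training time
    ["a"] * 30 + ["b"] * 70,   # distribution observed in production
)
```

A real monitoring service would run checks like this per feature on a schedule and fan alerts out to the channels mentioned above when the threshold is crossed.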
Models need to be updated frequently and seamlessly. MLOps model lifecycle management supports the testing and warm-up of replacement models, A/B testing of new models against older versions, seamless rollout of updates, failover procedures, and full version control for simple rollback to prior model versions.
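The A/B testing step above can be sketched as a deterministic traffic splitter: a stable hash of the request ID assigns a fixed percentage of traffic to the new model, so the same request always hits the same version. The function names and the 10% split are assumptions for illustration, not a specific MLOps API.

```python
import hashlib

def bucket(request_id: str, buckets: int = 100) -> int:
    """Map a request ID to a stable bucket in [0, buckets)."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    return int(digest, 16) % buckets

def route(request_id, champion, challenger, challenger_pct=10):
    """Send challenger_pct% of traffic to the new model, the rest to the old one."""
    model = challenger if bucket(request_id) < challenger_pct else champion
    return model(request_id)

# Hypothetical stand-ins for two deployed model versions.
champion_model = lambda rid: ("v1", rid)
challenger_model = lambda rid: ("v2", rid)

results = [route(f"req-{i}", champion_model, challenger_model) for i in range(1000)]
```

Because routing is hash-based rather than random, rollout is seamless: raising `challenger_pct` to 100 completes the cutover, and dropping it to 0 is an instant rollback to the prior version.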
MLOps governance provides the integrations and capabilities you need to ensure consistent, repeatable, and reportable processes for your models in production. Key capabilities include access control for production models and systems, including integration with LDAP and role-based access control (RBAC) systems, as well as approval flows, logging, version storage, and traceability of results for legal and regulatory compliance.
See What MLOps Can Do for Data Engineers
DataRobot MLOps allows data engineers to manage cutting-edge predictive models in an efficient and value-driven way.
Three Key Feature Sets
Work flexibly with the different types and shapes of data that serve your needs.
- Real-time predictions
- Batch predictions
- Service health monitoring
- Time series predictions
- Image and geospatial data types
- Java scoring code
- Portable docker image
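Real-time predictions are typically served over a REST endpoint. The sketch below builds such a request; the URL pattern, header names, host, deployment ID, and token are all illustrative assumptions — consult your own deployment's API documentation for the real values.

```python
import json

def build_prediction_request(host, deployment_id, api_token, rows):
    """Assemble the URL, headers, and JSON body for scoring rows against a
    hypothetical deployed-model REST endpoint."""
    url = f"https://{host}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps(rows)
    return url, headers, body

url, headers, body = build_prediction_request(
    "example.company.com",   # hypothetical prediction server host
    "abc123",                # hypothetical deployment ID
    "API_TOKEN",             # placeholder credential
    [{"feature_1": 1.0, "feature_2": "a"}],
)
# An HTTP client, e.g. requests.post(url, headers=headers, data=body),
# would then return one prediction per submitted row.
```

Batch predictions follow the same idea at larger scale: rather than one small JSON body per call, a job scores a whole file or table and writes the results back to storage.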
Operating at Scale
Use and build upon the foundation you already have.
- Monitoring diverse prediction environments
- Audit logs
- Versioning and lineage
- Change approval workflows
- No-code prediction GUI
- Value and use case tracking
- Repo integration
Making Machine Learning Trustworthy
Deploy reliable, trustworthy, and unbiased models.
- Data drift analysis
- Accuracy analysis
- Anomaly warnings
- Prediction explanations
- Champion/Challenger gates into production
- Humble AI – built-in mechanisms that ensure trust in your models
- Prediction intervals
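A champion/challenger gate can be pictured as a simple promotion rule: the challenger replaces the champion only if it beats it on held-out data by a margin. The accuracy metric and the 1% margin below are assumptions for the sketch; a production gate would also weigh drift, latency, and business metrics.

```python
def accuracy(model, rows, labels):
    """Fraction of rows the model labels correctly."""
    predictions = [model(r) for r in rows]
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def promote_if_better(champion, challenger, rows, labels, margin=0.01):
    """Keep the champion unless the challenger clears it by `margin` accuracy."""
    champ_acc = accuracy(champion, rows, labels)
    chall_acc = accuracy(challenger, rows, labels)
    return challenger if chall_acc >= champ_acc + margin else champion

# Hypothetical models on a tiny held-out set: the champion always predicts 0,
# while the challenger recovers the true even/odd pattern.
champion = lambda r: 0
challenger = lambda r: r % 2
rows, labels = [0, 1, 2, 3], [0, 1, 0, 1]

winner = promote_if_better(champion, challenger, rows, labels)
```

Keeping this comparison as an explicit gate, rather than promoting new models automatically, is what makes the rollout step auditable and reversible.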
The Only Scalable MLOps Architecture
I really think using DataRobot MLOps is the reason why we didn’t have to stress about it [COVID] as much as other companies have. The only reason we were comfortable in doing that is that when we see performance changes via MLOps we can throw everything automatically back into DataRobot AutoML and see what it tells us in terms of model comparison and see what we need to do based on where we’re at at that point of time.
With MLOps, we were able to deploy both DataRobot and non-DataRobot models within minutes rather than weeks, enabling us to achieve a far faster time to value than with homegrown deployments. In addition, the monitoring capabilities ensure that our models are generalizing appropriately to new data. We have so far had 100% uptime on our deployments.