Laying an MLOps Foundation: 7 Key Requirements
Over the past several years, Machine Learning Operations (commonly referred to as MLOps, or ModelOps) has gained a growing following. With the rise of AI, many organizations recognize the need for a dedicated set of people, processes, and technologies to scale AI across the enterprise.
That’s where MLOps comes in. Like DevOps for software development, MLOps helps organizations realize the value of machine learning by empowering them to deploy models more simply and automatically, manage the model production lifecycle, and monitor and govern their models in production.
The Emerging Need for MLOps
AI within the enterprise has gone from experimental to downright essential in just a few short years. In 2019, Forrester reported that just over half of global data and analytics decision makers had implemented, or were in the process of implementing, some form of artificial intelligence. Within five years, that number will approach 100%. Organizations that do not adopt AI risk becoming irrelevant, or worse.
Because of this widespread adoption and anticipated success, business leaders are beginning to shift from asking, “What are the use cases we can tackle with AI?” to “How can we tackle more use cases with AI?” Once C-suite leaders begin to see savings, organizational and competitive benefits, and measurable ROI from AI projects, they’re likely to want to do more. That’s when a conversation around true adoption and scale of AI/ML across the enterprise typically starts taking place. In other words, only after companies cross the deep chasm between the promise of machine learning and actually implementing a handful of use cases do they start thinking about how to make the process repeatable and governed.
Challenges at Scale
Scaling AI across the enterprise is, unfortunately, easier said than done. Oftentimes, lack of communication between the data science teams creating machine learning models and the IT operations teams running them creates roadblocks, if not dams. When those teams establish channels of communication, a shared vocabulary, and clear lines of responsibility, moving machine learning models into production is no longer out of reach.
These challenges give rise to the need for people, processes, and tools dedicated to operationalizing AI and ML. After all, no trained model is a solution on its own. The model only becomes a solution once it’s implemented and is supporting or feeding an actual business application.
7 Key Requirements
As organizations begin to implement processes to operationalize machine learning, there are seven key requirements to prioritize in any MLOps strategy:
- Provision infrastructure resources needed for the ML lifecycle.
- Support multiple types of machine learning models created by different tools.
- Support software dependencies needed by models.
- Monitor models to make sure they continue to perform accurately, comply with policy and regulation, and do no harm.
- Offer deployment freedom: on-premises, cloud, and edge.
- Govern models to maintain lineage, explanations, auditability, and business outcomes.
- Retrain production models on newer data using the same data pipeline, algorithms, and code used to create the original model.
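To make the monitoring requirement concrete, here is a minimal sketch of one common drift check: the population stability index (PSI), which compares the distribution of a feature (or model score) in production against the distribution seen at training time. The function name, thresholds, and synthetic data below are illustrative assumptions, not part of any specific MLOps product; real monitoring stacks wrap checks like this in scheduled jobs and alerting.

```python
import numpy as np

def population_stability_index(reference, production, bins=10):
    """Compare two 1-D distributions with the PSI drift metric.

    Bins are derived from the reference (training-time) data. A common
    rule of thumb: PSI below 0.1 suggests little change, while values
    above roughly 0.2 are a signal to investigate and possibly retrain.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)
    # Smooth zero counts so the log ratio stays finite.
    ref_pct = (ref_counts + 1e-6) / (ref_counts.sum() + 1e-6 * bins)
    prod_pct = (prod_counts + 1e-6) / (prod_counts.sum() + 1e-6 * bins)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# Synthetic example: scores at training time vs. two production windows.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)   # distribution at training time
stable = rng.normal(0.0, 1.0, 5000)         # production traffic, unchanged
shifted = rng.normal(0.8, 1.3, 5000)        # production traffic after drift

print(population_stability_index(train_scores, stable))   # low: no drift
print(population_stability_index(train_scores, shifted))  # high: drifted
```

In practice, a check like this runs per feature on a schedule; a sustained high PSI is what triggers the retraining step in the last requirement above, reusing the original pipeline and code on fresher data.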
These strategy components require buy-in from functions and disciplines across the company, most notably data science teams, ITOps, DevOps, and department leadership. AI can only work at scale with the appropriate processes in place and collaboration between the creators and consumers of machine learning models.
Watch the webinar below, which dives deeper into these seven requirements and includes tips for pitching MLOps to executives — a key prerequisite to organizational transformation.