
How to Version Control Your Production Machine Learning Models

June 25, 2018 · 7 min read

This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up to date and may refer to products and offerings that no longer exist. Find out more about DataRobot MLOps here.


Machine learning (ML) is about rapid experimentation and iteration, and without keeping track of your modeling history you won’t be able to learn much from it. Versioning lets you keep track of all of your models, how well they’ve performed, and what hyperparameters you used to get there. It is also an important component of AI/ML governance. This post will walk through why versioning is important, the tools available to get it done, and how to version the models you put into production.

Editor’s note: This blog post was last updated on March 26, 2021.

This article covers:

  • The importance of model versioning
  • Applicability to AI/ML governance
  • Versioning tools to get the job done
  • Further reading

The importance of model versioning

If you’ve spent time working with machine learning, one thing is clear: It’s an iterative process. There are so many different parts of your model—how you use your data, hyperparameters, parameters, algorithm choice, architecture—and the optimal combination of all of those is the holy grail of machine learning. But while there is some method to the madness, much of finding the right balance is trial and error. Even the best machine learning engineers working on the most complex deep learning projects still need to tinker to get their models right.

With that in mind, here are some of the reasons why versioning is so important to machine learning projects:

1. Finding the best model

Throughout that iterative process of updating and tinkering with the different parts of your model, your accuracy on your dataset will vary accordingly. To keep track of the best models you’ve created and the tradeoffs associated with each, you need to have a versioning system in place.
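To make this concrete, here is a minimal sketch of what such tracking can look like: each training run is appended to a log along with its hyperparameters, metrics, and a hash of the dataset it was trained on. All file and metric names here are illustrative, not any particular tool’s format.

```python
import hashlib
import json
import time
from pathlib import Path

LOG = Path("experiments.jsonl")  # append-only run log (illustrative name)

def log_run(hyperparams: dict, metrics: dict, data_path: str) -> None:
    """Record one training run so the best model can be found later."""
    data_hash = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()
    run = {
        "timestamp": time.time(),
        "hyperparams": hyperparams,
        "metrics": metrics,
        "data_sha256": data_hash,  # ties the run to the exact dataset version
    }
    with LOG.open("a") as f:
        f.write(json.dumps(run) + "\n")

def best_run(metric: str = "accuracy") -> dict:
    """Scan the log and return the run with the highest value for `metric`."""
    runs = [json.loads(line) for line in LOG.open()]
    return max(runs, key=lambda r: r["metrics"][metric])
```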

2. Failure tolerance

When pushing new versions of models into production, they can fail for any number of reasons. You want to update your models to take new data into account or incorporate speed improvements, but it’s tough to be sure how they’ll perform in real time. If you do encounter an issue with a production model, you need to be able to revert quickly to the previous working version.
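As a rough sketch (not any particular product’s API), a registry only needs to remember which versions exist and in what order they were promoted to make a fast revert possible:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Toy registry: 'production' always points at the last promoted version,
    and earlier versions are kept around for instant rollback."""
    versions: dict = field(default_factory=dict)  # version -> artifact path
    history: list = field(default_factory=list)   # promotion order, newest last

    def register(self, version: str, artifact_path: str) -> None:
        self.versions[version] = artifact_path

    def promote(self, version: str) -> None:
        if version not in self.versions:
            raise KeyError(f"unknown version {version}")
        self.history.append(version)

    def rollback(self) -> str:
        """Drop the current production version; return the previous one."""
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.history.pop()
        return self.history[-1]

registry = ModelRegistry()
registry.register("1.0", "models/churn-1.0.pkl")
registry.register("1.1", "models/churn-1.1.pkl")
registry.promote("1.0")
registry.promote("1.1")
print(registry.rollback())  # -> "1.0"
```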

3. Increased complexity and file dependencies

With traditional software versioning, there are only a couple of types of files to keep track of: your code and your dependencies. With machine learning, though, things are a bit more complex. First and foremost, you have datasets, which are typically not part of a normal software deployment. You need to keep track of what data you train and test on, and whether that changes over time.

Additionally, saving your models in most of the popular deep learning frameworks results in a file that you need to keep track of. Finally, models are often written in different languages and rely on multiple frameworks, which makes dependency tracking even more important.
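One lightweight way to handle this, sketched below with illustrative package and file names, is to write a small manifest next to every saved model that records the interpreter and framework versions the artifact depends on:

```python
import json
import pickle
import sys
from importlib.metadata import version  # standard library, Python 3.8+

def save_with_manifest(model, path: str,
                       frameworks=("scikit-learn", "numpy")) -> None:
    """Persist a model plus the environment facts needed to reload it.
    `frameworks` lists example packages; substitute whatever you depend on."""
    with open(path, "wb") as f:
        pickle.dump(model, f)
    manifest = {
        "artifact": path,
        "python": sys.version.split()[0],
        "dependencies": {pkg: version(pkg) for pkg in frameworks},
    }
    with open(path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```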

4. Gradual, staged deployment

If and when you make significant updates to your production models, those major changes are rarely deployed immediately and in one shot. To ensure failure tolerance and test appropriately, new models are typically rolled out gradually until teams can be sure that they’re working properly. Versioning gives you the tools to deploy the right model versions at the right times.
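The core mechanic of a staged rollout can be sketched in a few lines: route a small, configurable fraction of traffic to the candidate version and widen the split as confidence grows. The model names below are placeholders.

```python
import random

def make_router(canary_fraction: float):
    """Return a picker that sends `canary_fraction` of requests
    to the candidate model and the rest to the stable one."""
    def pick(stable_model, candidate_model):
        return candidate_model if random.random() < canary_fraction else stable_model
    return pick

pick = make_router(canary_fraction=0.05)  # start with a 5% canary
model = pick("churn-v1", "churn-v2")      # widen the split as metrics hold up
```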

5. AI/ML governance

A broader theme in all this is that you need effective governance for your machine learning projects.

Model versioning is one component of AI/ML governance, the overall process for how an organization controls access, implements policy, and tracks activity for models and their results. Effective governance is the bedrock for minimizing risk to both an organization’s bottom line and its brand. It is essential for minimizing organizational risk in the event of an audit, but it includes far more than regulatory compliance.

Complete and effective governance includes setting access controls for all models in production, versioning all models, creating the right documentation, monitoring models and their results, and implementing machine learning with existing IT policies.
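What that looks like in practice varies by organization, but even a simple, machine-readable record per model version can cover most of these components. The fields below are purely illustrative:

```python
# A minimal governance record for one model version (all values illustrative).
model_record = {
    "model": "credit-risk-classifier",
    "version": "2.3.0",
    "owner": "risk-ml-team",
    "approved_by": "model-governance-board",
    "training_data": {
        "path": "s3://example-bucket/credit/2021-03.parquet",
        "sha256": "<hash of the exact training snapshot>",
    },
    "allowed_consumers": ["loan-decision-api"],   # access control
    "documentation": "https://wiki.example.com/models/credit-risk",
    "monitoring": {"drift_check": "daily", "alerts": "#ml-alerts"},
}
```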

Organizations that effectively implement all components of ML governance can achieve a fine-grained level of control and visibility into how models operate in production while unlocking operational efficiencies that help them achieve more with their AI investments.

Learn more about AI/ML governance and how to implement it.

Versioning tools to get the job done

It’s hard to overstate how nascent the field of production machine learning is, which means the tools supporting this ecosystem are only beginning to mature. Here are some of the solutions that practitioners are currently using, along with some newer entrants.

1. Git

Git is the version control system used across the board to track and version software development and deployment. You might be familiar with GitHub or Bitbucket, commercial web-based services that host repositories managed with this open-source tool. Git tracks any changes made to your code and gives you functionality for implementing, storing, and merging those changes. Pretty much everyone uses it in one way or another.

[xkcd comic about Git. Source: xkcd]
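In practice, versioning a training run with plain Git can be as simple as committing the code and tagging the commit. Here is a minimal sketch driving the git CLI from Python; the file names and tag are illustrative, and a shell works just as well:

```python
import subprocess

def git(*args: str) -> str:
    """Thin wrapper around the git CLI (assumes git is on the PATH)."""
    result = subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

git("init")
git("add", "train.py", "requirements.txt")             # code and dependencies
git("commit", "-m", "baseline: lr=0.01, 3-layer MLP")  # record the hyperparameters
git("tag", "model-v1")                                 # mark the training commit
```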

But alas, Git is not without its issues. Beyond the often perplexing experience of using the tool itself, it’s missing a lot of the functionality you need for machine learning (because it wasn’t created for ML!). On its own, Git handles large datasets, binary model files, and model dependencies poorly. There are extensions that can help, but those solutions are tough to implement and rarely complete.

2. Sandbox environments

Data scientists often rave about Jupyter Notebooks, a sandbox-type environment that lets you run code in cells and insert Markdown in between. A notebook is like a book with runnable code in it: you can describe what each cell does and organize things in a visually pleasing way. Separating code into cells and sections is a viable way to version your different models during exploration.

When it comes to deployment and production though, versioning your models in a notebook doesn’t really cut it. Jupyter Notebooks are a tool for exploration and visualization, not for managing dependencies and tracking minute changes to hyperparameters.

3. Data Version Control (DVC)

Data Version Control (DVC) is a Git extension that adds functionality for managing your code and data together. It works directly with cloud storage (such as Amazon S3 or Google Cloud Storage) to push and pull your data changes. As its tutorial puts it, “DVC streamlines large data files and binary models into a single Git environment and this approach will not require storing binary files in your Git repository.” In effect, it combines Git with machine-learning-specific functionality for data management.


For a tutorial on how to implement DVC in your project and why it’s so helpful, check out this walkthrough.
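DVC also exposes a small Python API. Here is a minimal sketch of reading the exact dataset version that a Git tag points at; the repository URL, file path, and tag are hypothetical placeholders:

```python
import dvc.api

# Fetch the dataset exactly as it existed at the 'v1.0' Git tag.
data = dvc.api.read(
    "data/train.csv",
    repo="https://github.com/example-org/example-repo",
    rev="v1.0",  # any Git commit, branch, or tag
)
```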

4. Commercial solutions

Traditional business wisdom tells us that where there’s a problem, there’s a company, and a few startups are attempting to solve the versioning problem. Comet.ml, for example, is an automatic versioning solution that tracks and organizes all of your team’s modeling efforts. You can easily compare experiments, see the differences in code between two models, and invite team members to collaborate on a project.

5. Platforms as a service

Even once you’ve found a way to manage versioning during your training and experimentation process, much of the complexity resides in inference: deploying the right models in the right places at the right times. If you’re using a platform as a service to deploy your machine learning models, it may offer functionality around model versioning.
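The details differ by platform, but a common pattern is that callers pin a model version (or version range) in the request itself rather than always hitting “latest.” The endpoint below is a hypothetical illustration, not any specific vendor’s API:

```python
import requests

# Hypothetical endpoint: the version segment pins exactly which model serves
# this request, so clients are insulated from unannounced model changes.
response = requests.post(
    "https://api.example.com/v1/models/example-org/churn-model/1.2.0",
    json={"features": [0.3, 1.7, 5.2]},
    timeout=10,
)
print(response.json())
```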

Further reading

How to Version Control your Machine Learning task (Towards Data Science): “A component of software configuration management, version control, also known as revision control or source control, is the management of changes to documents, computer programs, large web sites, and other collections of information. Revisions can be compared, restored, and with some types of files, merged.”

Version Control for Data Science (DataCamp): “Keeping track of changes that you or your collaborators make to data and software is a critical part of any project, whether it’s research, data science, or software engineering. Being able to reference or retrieve a specific version of the entire project aids in reproducibility for you leading up to publication, when responding to reviewer comments, and when providing supporting information for reviewers, editors, and readers.”

Managing and versioning Machine Learning models in Python (SlideShare): “Practical machine learning is becoming messy, and while there are lots of algorithms, there is still a lot of infrastructure needed to manage and organize the models and datasets. Estimators and Django-Estimators are two python packages that can help version data sets and models, for deployment and effective workflow.”

Data Version Control: iterative machine learning (KDnuggets): “Today, we are pleased to announce the beta version release of new open source tool — data version control or DVC. DVC is designed to help data scientists keep track of their ML processes and file dependencies. Your existing ML processes can be easily transformed into reproducible DVC pipelines regardless of which programming language or tool was used.”
