
Realize the Value of AI at Production Scale with DataRobot 9.0

April 20, 2023
by Aditya Shankar · 4 min read

Organizations want to scale the use of AI to create value enterprise-wide. This might mean deploying hundreds of ML models to support use cases across the whole company. 

At the same time, the world is constantly changing, which impacts the performance and accuracy of your business critical deployments. Now imagine the impact of these changes on those hundreds of deployed models.

Maintaining multiple models in production and knowing which ones need attention to retain their accuracy, impact, and value in the long term is no easy task. This is why Production is such a vital—and challenging—part of the ML lifecycle.

The DataRobot AI Platform can help. It’s a complete AI lifecycle platform that is collaborative and easy to implement, gets you to value faster, and helps you to easily maintain that value over time.

DataRobot Production capabilities provide a single system of record for all your AI artifacts, helping you manage every production model, no matter who built it, how it was built, or where it is hosted. The platform unifies your fractured infrastructure, giving you a clear view of your entire model inventory.

With DataRobot 9.0, we are doing even more, by helping you:

  • Clearly calculate and track ML impact and easily communicate ROI.
  • Make deployment automation easier with our new GitHub Actions for CI/CD.
  • Quickly identify and deal with data drift to maintain the value of business critical deployments.

Let’s explore how each of these can help you realize the value of AI at production scale.

Track ML Impact and Value with Customized Metrics for Your Organization

The DataRobot AI Platform has long provided performance metrics like accuracy tracking. With custom metrics, we have extended this to value-based tracking.

Most organizations struggle to quantify and track the value of their AI projects. Our new custom inference metrics feature, unique to DataRobot, shifts the focus from high-level summary statistics to what matters most for your business.

With DataRobot 9.0, you can embed your own analytics, and from a single place, track traditional metrics like accuracy, as well as all your custom KPIs tied to a model. This gives you a constant, multidimensional view of that model’s impact on your business. 

As soon as any KPI falls below an acceptable threshold, an automated notification will be sent, and you can take appropriate action, such as retraining or replacing a model with a better performing challenger. You’ll continue to drive top- and bottom-line impact and improve the value of DataRobot investments across your organization.
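To make the threshold logic above concrete, here is a minimal Python sketch of checking custom KPIs against acceptable thresholds and collecting the ones that should trigger a notification. The class, function, and metric names are illustrative only, not the DataRobot API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CustomMetric:
    """An illustrative custom KPI with an acceptable threshold."""
    name: str
    threshold: float
    higher_is_better: bool = True


def breached(metric: CustomMetric, value: float) -> bool:
    """Return True when the KPI crosses its acceptable threshold."""
    if metric.higher_is_better:
        return value < metric.threshold
    return value > metric.threshold


def check_deployment(metrics: dict) -> list:
    """Collect names of KPIs that should trigger an alert for this model."""
    return [m.name for m, v in metrics.items() if breached(m, v)]


# Hypothetical readings for one deployed model
alerts = check_deployment({
    CustomMetric("approval_rate", threshold=0.80): 0.76,                      # below floor
    CustomMetric("cost_of_error", 5000.0, higher_is_better=False): 3200.0,    # within limit
})
print(alerts)  # ['approval_rate']
```

In a real setup, a breach like this would feed the platform's notification mechanism rather than a print statement.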

DataRobot deployment metrics

Make Deployment Automation Easier with GitHub Actions for CI/CD

Deploying models and calculating their value is one hurdle, but you also need to maintain that value in production. Our new GitHub Marketplace Action for CI/CD makes sure that you continuously sustain the value you initially created.

Whenever you update your models, Production can automatically build, test, and deploy a new model iteration into DataRobot, straight from your favorite command line, IDE, or Git tool. This means you can make deployment a completely self-service layer inside your business and update models quickly, without sacrificing control, governance, or observability.

For example, imagine you were tracking business KPIs like cost of error and regulatory fines via custom metrics. If those metrics started to trend in the wrong direction, you could easily replace your model using the GitHub Actions CI/CD workflow. This would save you time, ensure lineage and governance of your deployments, and help maintain the business value you expect from your models.
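The decision step in that example can be sketched in a few lines of Python, the kind of check a scheduled workflow run might execute before promoting a challenger. The function, metric name, and threshold are hypothetical, not part of any DataRobot or GitHub Actions interface.

```python
def should_replace(history: list, limit: float, window: int = 3) -> bool:
    """Replace the model when the KPI exceeds its limit for
    `window` consecutive checks (avoids reacting to a single spike)."""
    recent = history[-window:]
    return len(recent) == window and all(v > limit for v in recent)


# Hypothetical cost-of-error readings collected on each scheduled run
cost_of_error = [1800.0, 2100.0, 5400.0, 5900.0, 6200.0]

if should_replace(cost_of_error, limit=5000.0):
    print("KPI breached for 3 consecutive runs: promoting challenger model")
```

Requiring several consecutive breaches is one simple way to keep an automated replacement pipeline from churning models on noisy metrics.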

GitHub Actions for CI/CD - DataRobot AI Platform

Produce Better Models with Expanded and Robust Drift Management Capabilities

DataRobot has always offered deep drift management capabilities for ML deployments, no matter where or how models were built or where they are deployed. These capabilities help data scientists visualize, analyze, and share feedback about model drift. By understanding how models are drifting, and being alerted promptly when a model should be retrained, your organization can respond faster to changes in market conditions.

In DataRobot 9.0, we’re taking things further with an expanded suite of drift management capabilities. 

New visualizations help you quickly investigate the context and severity of drift. With a few clicks, you can view drift across multiple features (including text features), compare time periods, and more. The speed and depth at which you can analyze drift means you can take appropriate action before your business is impacted.

A new drift over time feature helps you estimate the severity of drift for each time bucket, while being mindful of any shifts in prediction volumes that should be taken into account. The new Drill Down tab provides a heat map for data drift across your features for each timestep, so you can detect correlated changes across multiple features in one view.
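One common way to score drift per time bucket, in the spirit of the drift-over-time view described above, is the Population Stability Index (PSI). The sketch below is a generic implementation of that standard technique, not DataRobot's internal drift metric; the baseline and bucket data are made up.

```python
import math
from collections import Counter


def psi(baseline: list, current: list, eps: float = 1e-4) -> float:
    """Population Stability Index between two categorical samples.
    Higher values indicate stronger drift (rule of thumb: > 0.2 is significant)."""
    categories = set(baseline) | set(current)
    base_freq, cur_freq = Counter(baseline), Counter(current)
    score = 0.0
    for cat in categories:
        p = max(base_freq[cat] / len(baseline), eps)  # baseline proportion
        q = max(cur_freq[cat] / len(current), eps)    # current proportion
        score += (q - p) * math.log(q / p)
    return score


# Score each time bucket against the training baseline,
# as a drift-over-time chart would.
baseline = ["A"] * 70 + ["B"] * 30
buckets = {
    "week 1": ["A"] * 68 + ["B"] * 32,  # mild shift
    "week 2": ["A"] * 40 + ["B"] * 60,  # strong shift
}
for name, sample in buckets.items():
    print(name, round(psi(baseline, sample), 3))
```

Computing one such score per feature per timestep yields exactly the kind of feature-by-time grid a drift heat map visualizes.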


These new features meet the needs of customers who require analysis that is both deep and fast, so they can manage AI in a rapidly changing world and a volatile economy that cause models to drift. They help you scope your investigations and produce better models for retraining, so model check-ins finish quickly and you can get back to building models.

Accelerate Your Path to Value with DataRobot 9.0

Building a process for a few models is relatively easy. Running a fleet of models in production is a very different prospect. That's why DataRobot 9.0 continues to make it simpler and more seamless for you and your teams to operate and deliver on the value of AI at production scale.

To find out more and see these new DataRobot 9.0 features in action, watch our Generate and Maintain Value of AI at Scale session.

About the author
Aditya Shankar

Regional Director, AI Success, APAC

Aditya is passionate about helping clients derive tangible value from their AI initiatives. He is deeply interested in shaping the AI agenda at the highest levels of client organizations, translating it into a practical, well qualified, and sequenced roadmap of AI use cases, which are then executed with clinical precision using DataRobot.

A former Lecturer in Computational Intelligence and Knowledge Engineering at the prestigious National University of Singapore, he earned his advisory stripes at eminent firms such as Boston Consulting Group and PwC.

Based in Singapore, Adi, as he is fondly known, is a sought-after speaker and thought leader in all aspects of the multi-dimensional field of Artificial Intelligence.
