
AutoML after Deployment: Continuous Model Competitions on Live Data

When creating a model, data scientists evaluate many approaches to find the best solution to their AI challenges. But this rigorous analysis often stops once the model is deployed. In production, unexpected conditions can degrade performance, and new issues can arise when the model consumes live data from external systems.

This session teaches you how to use DataRobot to build an automated solution that improves the real-world performance of your models after deployment.

Watch this session to:

  • Understand the factors that affect model performance
  • Learn how to build and evaluate challenger models in DataRobot
  • Develop a cost-effective method to improve model accuracy

Speakers

Tristan Spaulding

Senior Director, Product Management, DataRobot