
Introducing Multimodal Clustering

December 28, 2021
· 4 min read

People continue to be our most critical asset. The market changes every day, and the ability to make more decisions, faster, creates a significant advantage for an organization. Yes, data created over the next three years will far exceed the amount created over the past 30 years (Source: IDC Worldwide Global DataSphere Forecast, 2020-2024). But there’s only so much an organization can do, because it’s impossible to exponentially scale the number of decisions a human can make.

This applies to organizations on their AI journey. Every day we hear about AI projects failing or taking ages to deliver the expected value. Delays mean missed opportunities, lost revenue, damaged customer relationships, and competitive disadvantage, which explains why the pressure on data science teams keeps growing. Typical questions include:

  • How do we challenge the status quo and deliver business value in a timely fashion?
  • How can I iterate quickly and show results to my stakeholders?
  • What if my data doesn’t have predictive power?
  • How can I produce high-quality results in a few hours from a large variety of data?
  • Can I put all my data into one project without over-engineering?

A concrete example: data scientists are handed some data and tasked with surfacing business insights to help guide the next decisions. Clustering is a technique for getting a sense of the data while telling a powerful story. But it can be a huge challenge, even for an expert code-centric data scientist, to understand the signal captured by the algorithm. It becomes even more complicated when the data consists of diverse types, such as images, numerics, and text.

There are a lot of decisions to make.

In DataRobot, starting with the 7.3 release, clustering with multimodal data, whether with code or no code, takes the legwork out of the process, removing the need for the data scientist to make a zillion technical decisions. As experts, they can spend more time on the decisions that really matter.

Introducing Multimodal Clustering

With out-of-the-box multimodal clustering, data scientists can quickly deliver insights on demand. With no code, low code, or full code, they can let the machine run a comprehensive experiment and focus on telling a story from the data.

Let me walk you through an example. I took a Utah house price listings dataset, which combines diverse feature types: numeric, categorical, raw text, images, and geospatial data.

Step 1. Multimodal Clustering Autopilot  

Simply select “no target” and let DataRobot generate various training pipelines and run them against your dataset. That’s Autopilot mode. Several options and parameters are tested; you have full control over them and can re-run with different settings at any time.
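To make the idea concrete: DataRobot's pipelines are proprietary, but conceptually an automated multimodal clustering experiment encodes each feature type appropriately and then clusters the combined representation. Here is a minimal scikit-learn sketch of that idea, with made-up column names for a house-listings-style dataset.

```python
# Conceptual sketch of a multimodal clustering pipeline (not DataRobot's
# actual implementation): scale numerics, one-hot categoricals, TF-IDF the
# text, then cluster the combined feature matrix.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative toy data; real projects would load the full listings dataset.
listings = pd.DataFrame({
    "sqft": [900, 2500, 4100, 1200, 3800, 950],
    "bathrooms": [1, 3, 5, 2, 4, 1],
    "city": ["Provo", "SLC", "SLC", "Provo", "Park City", "Provo"],
    "description": ["cozy starter home", "spacious family house",
                    "luxury estate with views", "charming bungalow",
                    "mountain mansion", "compact condo"],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["sqft", "bathrooms"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
    ("txt", TfidfVectorizer(), "description"),  # 1-D column for the vectorizer
])

pipeline = Pipeline([
    ("features", preprocess),
    ("cluster", KMeans(n_clusters=2, n_init=10, random_state=0)),
])
labels = pipeline.fit_predict(listings)
print(labels)  # one cluster id per listing
```

Autopilot effectively runs many variations of this kind of pipeline (different encodings, cluster counts, and algorithms) so you don't have to hand-build each one.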

Multimodal Clustering Autopilot DataRobot AI Cloud

Step 2. Deep Dive into Model Insights

When all the jobs triggered by Autopilot are done, every model generated is listed on the Leaderboard. That’s where you can quickly pick a first model based on its Silhouette score (which measures how well separated the clusters are) and on whether it produced a reasonable number of clusters.
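The Leaderboard idea can be reproduced in miniature: fit several cluster counts and rank them by Silhouette score, where values closer to 1 mean better-separated clusters. This scikit-learn sketch on synthetic data is purely illustrative; DataRobot computes and ranks these scores for you.

```python
# Build a mini "leaderboard" of cluster counts ranked by Silhouette score.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

leaderboard = []
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    leaderboard.append((k, silhouette_score(X, labels)))

# Sort best-first: a good first pick balances score and cluster count.
leaderboard.sort(key=lambda row: row[1], reverse=True)
print(leaderboard)
```

A high score alone isn't the whole story; a model with two giant clusters can score well yet tell a less useful story than one with a handful of interpretable segments.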

You can use standard model-agnostic explainability tools and visualizations to see which features are most impactful and, for each feature type (numeric, categorical, text, image, and geospatial), how values are distributed across clusters.
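Once each row carries a cluster label, per-cluster feature distributions fall out of a simple group-by. A pandas sketch with illustrative column names (not DataRobot's):

```python
# Summarize how feature values are distributed across clusters.
import pandas as pd

df = pd.DataFrame({
    "cluster": [0, 0, 1, 1, 1],
    "sqft": [900, 1100, 3800, 4200, 3500],
    "city": ["Provo", "Provo", "SLC", "SLC", "Park City"],
})

# Numeric feature: summary statistics per cluster.
print(df.groupby("cluster")["sqft"].agg(["mean", "min", "max"]))

# Categorical feature: value counts per cluster.
print(df.groupby("cluster")["city"].value_counts())
```

DataRobot renders these same comparisons as built-in visualizations, one per feature type.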

Cluster Insights DataRobot AI Cloud

Here’s an example with an image feature. For image features, you can go deeper and inspect image embeddings and activation maps to see what the model pays attention to when assigning a cluster. Each feature type comes with its own visualizations.
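Under the hood, image embeddings are high-dimensional vectors produced by a pretrained featurizer; visualizing them means projecting to two dimensions. In this sketch the embeddings are random stand-ins (in practice they would come from a CNN's penultimate layer), which is enough to show the projection step.

```python
# Project high-dimensional image embeddings to 2-D for plotting.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 512))  # 50 images x 512-dim features (stand-ins)

coords = PCA(n_components=2, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (50, 2): one (x, y) point per image, ready to scatter-plot
```

Coloring those 2-D points by cluster label shows at a glance whether visually similar images landed in the same cluster.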

Image Embeddings DataRobot AI Cloud

You can use the Feature Impact tool to see which features had the most influence on the clustering outcomes.
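One common way to estimate feature impact for clustering (not necessarily DataRobot's exact computation) is permutation: shuffle one feature's values and count how many cluster assignments change. Features that flip more assignments matter more. A scikit-learn sketch on synthetic data:

```python
# Permutation-style feature impact for a fitted clustering model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, n_features=4, random_state=1)
model = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
base = model.predict(X)

rng = np.random.default_rng(1)
impact = {}
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature's link to the rows
    changed = float(np.mean(model.predict(Xp) != base))
    impact[f"feature_{j}"] = changed

print(impact)  # fraction of rows whose cluster flipped, per feature
```

The same idea generalizes to categorical, text, and image features once they are encoded numerically.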

Feature Impact DataRobot AI Cloud

Step 3. Name Clusters

Focusing on one cluster at a time or one descriptive feature at a time, you can now work your magic, understanding what “makes” a cluster and what the differences are between clusters. Is Cluster 3 made of houses whose acreage, square footage, and number of bathrooms sit at the high end of the dataset’s range? Do the exterior photos show mostly mansions? Does the geospatial data show the homes are mostly downtown? Let’s go ahead and name this cluster “Downtown Mansions.”

That’s where you’re making the real, valuable, and impactful decision.

Step 4. Getting the Model Deployed

When you’re done naming all the clusters and crafting the story they tell, you can deploy your model in one click, just as with any other model, so it can serve cluster assignment requests later on. With MLOps capabilities, it’s easy to manage and monitor the deployed model, again with fewer low-value technical decisions for you to make.
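Conceptually, deployment boils down to persisting the fitted model and serving cluster assignments for new rows. DataRobot automates this end to end; here is the bare-bones equivalent with joblib, on synthetic data, to show what the one click stands in for.

```python
# Persist a fitted clustering model, reload it, and serve assignments.
import joblib
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=7)
model = KMeans(n_clusters=3, n_init=10, random_state=7).fit(X)

joblib.dump(model, "cluster_model.joblib")    # at deploy time
served = joblib.load("cluster_model.joblib")  # inside the serving process

new_rows = X[:2] + 0.1                        # pretend these arrived via an API
print(served.predict(new_rows))               # cluster id per request row
```

A managed deployment adds the pieces this sketch omits: request handling, drift monitoring, logging, and model replacement.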

Model Deployment DataRobot AI Cloud

Keeping Your Finger on the Pulse of Your Business

As you can see, DataRobot’s multimodal clustering takes as many technical decisions as possible off your plate so that you can focus on where your expertise matters most: iterating over and selecting the model, then crafting a story. With any dataset, to support any business case. No more sleepless nights. No more missed opportunities to shine in front of your business stakeholders.

Ready to Try?

If you are a DataRobot AI Platform user, get your hands on multimodal clustering today. Access the public documentation to find more enablement materials. If you are new to DataRobot, reach out to our team for a deep-dive demo of the platform. Want to continue the discussion? Share your questions in the Data Science Community.

About the author
Sylvain Ferrandiz

Product Director, Data Science, DataRobot

Sylvain Ferrandiz is a Senior Product Director at DataRobot focused on Machine Learning and Data Science capabilities. He has spent 15 years working in machine learning, data science and analytics, in a variety of roles and enterprises – as a research scientist, manager, product owner and in user relations. He’s passionate about identifying, understanding, and helping to solve problems companies and users have when trying to generate tangible business value out of their machine learning, data science, and analytics initiatives.
