
Hands-On Learning: How I Created My Own AI Use Case

August 21, 2020 · by Sara Cooper · 6 min read

I started interning at DataRobot on May 11th. As a student, I haven't had much experience with real-world applications of data science. Many of the courses I have taken are theory-based mathematics or programming, which has helped me understand the modeling methods and statistical analysis DataRobot automates, but not so much how to apply them to business problems. In other words, when tasked with contributing to DataRobot Pathfinder, I felt comfortable with the tech implementation but was very intimidated by the business implementation. I am a hands-on learner, so while the DataRobot Essentials class was beneficial, the best way for me personally to learn how to navigate this new technology was to explore the platform. With the luxury of researching any industry and business problem, creating my own use case was as fun and exciting an experience as it was informative.

I chose the media and entertainment industry because, as a film minor (something I love to mention as it makes me seem much more interesting), I figured it would be compelling to see how data science and automated machine learning transform that field. I read through other use cases and the DataRobot AI Simplified blog post, What Makes a Good Machine Learning Use Case?, to figure out the right type of problem to solve. I landed on digital media publishers deciding which articles to lock behind a paywall. With ad-blocking technology, paid subscription access is a popular way publishers can still monetize their websites, so building a more effective paywall has become a widespread use case across the industry. I worked through the four-step checklist mentioned in the blog:

  1. Do you know what you want to predict? Whether or not the article is worthy of paywalling. 
  2. Is there any historical data to work from, and does it contain measurements of what you want to predict? I was able to mock up some data, which I detail a bit later.
  3. What changes will be made by having a prediction? With more insight, staff can make more informed decisions about each article. This could be useful feedback for authors to leverage in new articles. 
  4. What impact will this have? Adjusting the company’s focus and content allows them to allocate resources better, which could boost subscription revenue.

I used my answers to those questions, as well as some background and industry research to detail the business problem at hand and its possible intelligent solution. That completed the overview section of my use case. 

Moving on to the tech implementation portion: this section involved describing my data and its preparation, walking through the model training process, sharing some of the results and their interpretations, and evaluating the model's accuracy.

Since I am not a digital media publisher, I had to create my historical data. The dataset resembled realistic metadata for 1,000 articles, encompassing the key characteristics that matter for predicting an article's likelihood of leading to a subscription. These characteristics, such as themes, reading time, authorship, and recency, became the features. They also illustrate a few of the many feature types DataRobot can ingest: text, numeric, categorical, and date, respectively. After randomizing the inputs for those features (ten in total), I had to create a formula that would generate a target variable mirroring real-life relationships to the different features.
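To make that concrete, here is a minimal sketch of how a mock dataset like mine could be generated in Python. The feature names, distributions, and coefficients below are illustrative assumptions, not the exact formula I used.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000  # number of mock articles

# Hypothetical features spanning the four types mentioned above:
# text (theme), numeric (reading time), categorical (author), date (publish date).
themes = rng.choice(["politics", "sports", "finance", "lifestyle", "tech"], n)
reading_time = rng.normal(7, 3, n).clip(1, 30)               # minutes
authors = rng.choice([f"author_{i}" for i in range(25)], n)
days_old = rng.integers(0, 180, n)                           # days since publication
publish_dates = pd.Timestamp("2020-08-01") - pd.to_timedelta(days_old, unit="D")

# Illustrative target formula: longer, more recent finance/tech articles are
# more likely to lead readers to a subscription (plus some noise).
score = (
    0.05 * reading_time
    - 0.01 * days_old
    + 0.8 * np.isin(themes, ["finance", "tech"])
    + rng.normal(0, 0.5, n)
)
prob = 1 / (1 + np.exp(-(score - score.mean())))             # squash to [0, 1]
leads_to_subscription = (rng.random(n) < prob).astype(int)   # binary target

articles = pd.DataFrame({
    "theme": themes,
    "reading_time_min": reading_time.round(1),
    "author": authors,
    "publish_date": publish_dates,
    "leads_to_subscription": leads_to_subscription,
})
articles.to_csv("mock_articles.csv", index=False)
```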

Then came the entertaining part. With a quick upload to DataRobot, I processed and partitioned the dataset and automatically ran dozens of models within minutes. As a data science major, I have spent most of my time learning about hand-coding and manually testing various methods, models, and partitioning processes. So, to see it all automated so quickly was impressive but a bit humorous, considering how long it would have taken me. 

The insights and tools supplement what I'm learning in my classes; the calculations and visualization code behind them are probably something my undergraduate curriculum leaves out. If I wanted these same visuals on my own, it would involve a lot of additional research, but DataRobot pulls all of the information together in its Python client.
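For instance, the whole flow, from uploading the dataset to pulling an insight like feature impact, can be scripted with the client. This is a rough sketch rather than my exact code: the endpoint, token, file name, and target column are placeholders, and the calls should be checked against the current datarobot client documentation.

```python
import datarobot as dr

# Connect to DataRobot (endpoint and token are placeholders).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Upload the mock article metadata and let Autopilot handle
# partitioning, preprocessing, and model training.
project = dr.Project.create("mock_articles.csv", project_name="Paywall Worthiness")
project.set_target(target="leads_to_subscription", mode=dr.AUTOPILOT_MODE.FULL_AUTO)
project.wait_for_autopilot()

# Browse the leaderboard and pull feature impact for the top model.
best_model = project.get_models()[0]
print(best_model.model_type, best_model.metrics["AUC"]["validation"])
feature_impact = best_model.get_or_request_feature_impact()
print(feature_impact[:3])
```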

In the results and interpretations, I included feature impact, partial dependence plots, and prediction explanations, and in the accuracy evaluation I added the lift chart, ROC curve, and confusion matrix. Of the six I just listed, I had only learned about ROC curves and confusion matrices in class. For the rest, I was able to learn about their calculation and meaning from the documentation the DataRobot platform provides.
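As a refresher on the two charts I already knew from class, here is how an ROC curve and a confusion matrix could be reproduced by hand with scikit-learn, given held-out labels and predicted probabilities (the values below are illustrative stand-ins, not my actual validation data).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

# Actual "leads to subscription" labels and the model's predicted probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.65, 0.55, 0.90])

# ROC curve: true-positive rate vs. false-positive rate across all thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
print("AUC:", roc_auc_score(y_true, y_prob))

# Confusion matrix at a single threshold (0.5 here):
# rows are actual classes, columns are predicted classes.
y_pred = (y_prob >= 0.5).astype(int)
print(confusion_matrix(y_true, y_pred))
```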

The last thing I had to do for this part was decide on the prediction threshold. This decision typically stems from a business rule, aided by the ROC curve and confusion matrix. In my case, I didn't think I needed to prioritize one rate over the other, the false-positive rate (non-worthy articles labeled worthy) versus the true-positive rate (worthy articles marked worthy), so I chose a threshold that minimized misclassifications overall. Another simple post-processing step is separating the probabilistic predictions into high, medium, or low labels depending on where they fall relative to the threshold; this way, specific actions can be taken for each level. I will touch on this step's significance later on, but that step completes the tech implementation.
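Here is a hedged sketch of both post-processing steps: picking the threshold that minimizes total misclassifications, then binning probabilities into low/medium/high bands. The band cutoffs and variable values are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Validation labels and predicted probabilities (illustrative values).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.65, 0.55, 0.90])

# Pick the candidate threshold with the fewest total misclassifications.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
n_pos = y_true.sum()
n_neg = len(y_true) - n_pos
misclassified = fpr * n_neg + (1 - tpr) * n_pos    # false positives + false negatives
best_threshold = thresholds[np.argmin(misclassified)]

# Bin probabilistic predictions into low / medium / high action bands.
def score_band(prob, threshold, margin=0.15):
    """Label a prediction relative to the chosen threshold."""
    if prob < threshold - margin:
        return "low"
    if prob < threshold + margin:
        return "medium"
    return "high"

bands = [score_band(p, best_threshold) for p in y_prob]
```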

Finally, I had to write up the business implementation, which included describing the model deployment, the decision environment, stakeholders, and process, and any implementation risks. Luckily, I was able to leverage my data science colleagues’ thought leadership to create best practices around this. 


There are three main decision maturities: automation, augmentation, or a blend of the two. I chose augmentation because I still thought the knowledge and expertise of editors and writers would be valued. Instead of automating the predictions into the data pipeline, the predictions act more as an intelligent machine aid with in-depth knowledge of readers. I deployed a "Light Gradient Boosting on ElasticNet Predictions" model from near the top of the leaderboard. Since I am not a very technical person, I figured DataRobot's Predictor application, a frontend dashboard that makes it very easy to score new data, would be a perfect fit.

The decision stakeholders include: any decision executors, those who will directly consume the predictions and make decisions on a daily or weekly basis; decision managers, those who will oversee the decisions being made; and decision authors, those who are the technical stakeholders who will set up the decision flow. 

The decision process covers all the possible decisions executors can make across the thresholds created in the post-processing step that separates predictions into low, medium, and high. Deciding on these actions took the most research and industry expertise, and they represent the main effect of the model. If an article is scored low, I think it's reasonable to leave it outside of the paywall as filler. However, as a consumer of online media, if the publisher locks all of the "good" content behind a paywall, I have no gauge of the quality of content a subscription would unlock. The score answers which articles should be subscription-only and which should be kept outside the paywall as loss leaders to lure customers toward a subscription. Medium-scored articles are still "good" enough to attract subscribers, while high-scored articles can be locked down and marketed more heavily on social media.
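In code, this decision flow boils down to a simple mapping from score band to paywall action. The mapping below paraphrases my reasoning above; it is a hypothetical illustration, not a DataRobot feature.

```python
# Hypothetical mapping from post-processed score band to editorial action.
PAYWALL_ACTIONS = {
    "low": "Leave outside the paywall as free filler content.",
    "medium": "Keep outside the paywall as a loss leader to showcase quality.",
    "high": "Lock behind the paywall and promote on social media.",
}

def recommend_action(band: str) -> str:
    """Translate a low/medium/high score band into an editorial recommendation."""
    return PAYWALL_ACTIONS[band]

for band in ("low", "medium", "high"):
    print(band, "->", recommend_action(band))
```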

The implementation risks had common threads throughout the various use cases I read, so it was more about seeing which ones I thought applied to my use case. The media and entertainment industry relies heavily on understanding trends, so I noted under risks that retraining the model often and making sure not all decisions were automated would be vital; writers will still have to rely on their ingenuity to understand the rising social trends that people want to read about.

This experience taught me a lot about applying what I have learned in my university classes to the real world, but it also supplemented the technical skills I already had. If you are interested in my use case, you can read it here.

Use Case Library
Find your path to AI success with Pathfinder
Explore
About the author
Sara Cooper

Data Science Intern, DataRobot

Sara interned with DataRobot in the summer of 2020 with a focus in customer-facing data science. Sara is currently working towards her B.S. in Data Science as a senior at the University of Michigan.
