Churn Prediction with Streamlit App

Churn / Retention | Streamlit | Telecommunications

This notebook shows how to build an app that presents DataRobot predictions and prediction explanations to your business stakeholders.


This workflow uses a churn prediction use case as an example of building an app from prediction output. A summary of the use case can be found here.

Overview of steps:

  1. Fetch predictions and prediction explanations from a DataRobot deployment (Notebook)
  2. Save the prediction output as a CSV, which can then be used as the backend for the Streamlit app (note that an alternate approach is to request predictions from the DataRobot Prediction API directly and generate them on the fly in the app, as sketched after this list)
  3. The streamlit_app.py file contains the code that generates the frontend of the app, which lets users access the churn prediction scores and shows the top churn reasons for the population they select
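A minimal sketch of that on-the-fly alternative is shown below. It assumes you copy the prediction server URL and (if your environment requires one) the DataRobot key from the sample code on your deployment's Deployments > Predictions > Prediction API tab; the placeholder names here are illustrative, not the accelerator's actual code.

In [ ]:

# Sketch of scoring on the fly against the real-time Prediction API instead of a CSV backend.
# PREDICTION_SERVER and DATAROBOT_KEY are placeholders; copy the real values from the
# Prediction API tab of your deployment.
import requests

PREDICTION_SERVER = "https://{prediction-server.example.com}"
DATAROBOT_KEY = "[ENTER YOUR DATAROBOT KEY IF YOUR ENVIRONMENT REQUIRES ONE]"


def score_realtime(records, deployment_id, api_token):
    """Request predictions and up to five explanations for a small list of records."""
    url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Content-Type": "application/json; charset=UTF-8",
        "Authorization": f"Bearer {api_token}",
        "DataRobot-Key": DATAROBOT_KEY,
    }
    response = requests.post(
        url,
        headers=headers,
        json=records,  # a list of {feature_name: value} dictionaries
        params={"maxExplanations": 5},
    )
    response.raise_for_status()
    return response.json()  # predictions are returned under the "data" key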

Relevant documentation:

  1. DataRobot Batch Prediction API
  2. Streamlit

Prerequisites:

  1. To use this workflow for your own use case, you should have created and deployed a model using DataRobot. A short tour is available at Build and deploy AI models
  2. Have the deployment ID for your deployment
  3. Have the scoring dataset ready for making predictions and fetching prediction explanations

To see and test how the app will look, follow this notebook; you can use the example churn prediction dataset provided here.

Instructions to use this workflow for your use case:

  1. Enter your DataRobot API token and endpoint in the “Connect to DataRobot” section
  2. Enter your deployment ID in the “Fetch information about your deployment” section
  3. Load the dataset on which you want to make predictions
  4. Export the predictions obtained in the “Request predictions” step
  5. Modify the relevant sections in the streamlit_app.py file

Import libraries

In [1]:

import json

import datarobot as dr
import numpy as np
import pandas as pd
import requests
import streamlit as st

Connect to DataRobot

  1. In DataRobot, navigate to Developer Tools by clicking on the user icon in the top-right corner. From here you can generate an API key that you will use to authenticate to DataRobot. You can find more details on creating an API key in the DataRobot documentation
  2. Determine your DataRobot API endpoint: the API endpoint will be the same as your DataRobot UI root. Replace {datarobot.example.com} with your deployment endpoint.

API endpoint root: https://{datarobot.example.com}/api/v2

For users of the AI Cloud platform, the endpoint is https://app.datarobot.com/api/v2

In [ ]:

DATAROBOT_API_TOKEN = "[ENTER YOUR API KEY]"
DATAROBOT_ENDPOINT = "[https://{datarobot.example.com}/api/v2]"
dr.Client(token=DATAROBOT_API_TOKEN, endpoint=DATAROBOT_ENDPOINT)

Fetch information about your deployment to make predictions

To generate predictions on new data using the Prediction API, you need:

The model’s deployment ID. You can find the ID in the sample code output of the Deployments > Predictions > Prediction API tab (with Interface set to “API Client”).

In [4]:

# Get this information from the Predictions > Real-time tab of your deployment
DEPLOYMENT_ID = "ENTER YOUR DEPLOYMENT_ID"
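Optionally, you can confirm that the ID points at the deployment you expect by fetching it with the Python client. This is just a sanity check and is not required for the workflow.

In [ ]:

# Optional: fetch the deployment to verify the label and model behind this ID
deployment = dr.Deployment.get(deployment_id=DEPLOYMENT_ID)
print(deployment.label)
print(deployment.model)  # details of the model backing this deployment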

Load the scoring data

This workflow assumes that you have the data to be scored saved as a CSV file on your computer.

In [4]:

# import the scoring file
scoring_data = pd.read_csv("prediction_data_SHAP.csv")
In [5]:

# Display the first five rows
scoring_data.head()
(first five rows, shown transposed for readability)

Column                Row 0                     Row 1                     Row 2                     Row 3                     Row 4
Customer_ID           8779-QRDMV                7495-OOKFY                1658-BYGOY                4598-XLKNJ                4846-WHAFZ
Dependents            No                        Yes                       Yes                       Yes                       Yes
Number_of_Referrals   0                         1                         0                         1                         1
Tenure_in_Months      1                         8                         18                        25                        37
Internet_Type         DSL                       Fiber Optic               Fiber Optic               Fiber Optic               Fiber Optic
Internet_Service      Yes                       Yes                       Yes                       Yes                       Yes
Contract              Month-to-Month            Month-to-Month            Month-to-Month            Month-to-Month            Month-to-Month
Paperless_Billing     Yes                       Yes                       Yes                       Yes                       Yes
Payment_Method        Bank Withdrawal           Credit Card               Bank Withdrawal           Bank Withdrawal           Bank Withdrawal
Monthly_Charge        39.65                     80.65                     95.45                     98.5                      76.5
Zip_Code              90022                     90063                     90065                     90303                     90602
Lat_Long              34.02381, -118.156582     34.044271, -118.185237    34.108833, -118.229715    33.936291, -118.332639    33.972119, -118.020188
Latitude              34.02381                  34.044271                 34.108833                 33.936291                 33.972119
Longitude             -118.156582               -118.185237               -118.229715               -118.332639               -118.020188
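Before requesting predictions, a quick optional check (sketched below) can confirm that the passthrough ID column is present and surface any columns with many missing values. The column name Customer_ID matches the example dataset; adjust it for your own data.

In [ ]:

# Optional sanity checks on the scoring data before sending it for predictions
print(scoring_data.shape)
assert "Customer_ID" in scoring_data.columns, "Expected a Customer_ID column for passthrough"
print(scoring_data.isna().sum().sort_values(ascending=False).head(10))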

Request predictions

In [6]:

# Create a batch prediction job to get predictions and explanations
job, df = dr.BatchPredictionJob.score_pandas(
    DEPLOYMENT_ID, scoring_data, max_explanations=5, passthrough_columns=["Customer_ID"]
)
Streaming DataFrame as CSV data to DataRobot
Created Batch Prediction job ID 63f44acb42a1b29724edc262
Waiting for DataRobot to start processing
Job has started processing at DataRobot. Streaming results.
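The exact column names in the returned DataFrame depend on your target and deployment settings, but the output generally contains the prediction columns, the passthrough Customer_ID column, and one group of explanation columns per requested explanation. A quick way to inspect them is sketched below.

In [ ]:

# Inspect the prediction output and list the explanation-related columns
print(df.shape)
explanation_columns = [c for c in df.columns if "EXPLANATION" in c.upper()]
print(explanation_columns)
df.head()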

Export the prediction output as a CSV file

In [8]:

df.to_csv("prediction_output_SHAP.csv", index=False)

Use the prediction output obtained from DataRobot (‘prediction_output_SHAP.csv’) as the backend data for your Streamlit app

At this point, navigate to the streamlit_app.py file to make modifications based on your prediction dataset, and then run the Streamlit command below to test your app.
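For orientation, below is a minimal sketch of what a streamlit_app.py backed by the exported CSV could look like; it is not the accelerator's full app. The score column name and the EXPLANATION_1_FEATURE_NAME column are assumptions about the prediction output, so rename them to match the columns in your own prediction_output_SHAP.csv.

# streamlit_app.py -- minimal sketch, not the accelerator's full app.
# Assumes a Customer_ID passthrough column, a churn probability column, and
# EXPLANATION_*_FEATURE_NAME columns; adjust the names to your prediction output.
import pandas as pd
import streamlit as st

PREDICTION_FILE = "prediction_output_SHAP.csv"
SCORE_COLUMN = "Churn_Yes_PREDICTION"  # assumed column name; check your CSV

st.title("Customer Churn Predictions")
df = pd.read_csv(PREDICTION_FILE)

# Let the user choose a churn-risk threshold and filter the population
threshold = st.slider("Minimum churn probability", 0.0, 1.0, 0.5, 0.05)
at_risk = df[df[SCORE_COLUMN] >= threshold]

st.metric("Customers above threshold", len(at_risk))
st.dataframe(
    at_risk[["Customer_ID", SCORE_COLUMN]].sort_values(SCORE_COLUMN, ascending=False)
)

# Most frequent top churn driver for the selected population
if "EXPLANATION_1_FEATURE_NAME" in df.columns and not at_risk.empty:
    top_reasons = at_risk["EXPLANATION_1_FEATURE_NAME"].value_counts()
    st.subheader("Top churn reasons (strongest explanation per customer)")
    st.bar_chart(top_reasons)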

In [ ]:

!streamlit run streamlit_app.py --theme.base 'dark'