Predict Claims Severity from First Notice of Loss

Predict the ultimate payout for each claim at First Notice of Loss (FNOL); a component of claims straight through processing (STP).

Overview

Business Problem

Claim payments are typically an insurance company’s largest cost, and unpaid claims are often its largest liability. For long-tail lines of business such as workers’ compensation, which covers medical expenses and lost wages for injured workers, claims can take years to be paid in full. This means the true cost of a claim may not be known for many years. As insurers need to have sufficient reserves to pay their liabilities when they are due, it becomes critical for them to accurately estimate the case reserve required for each claim at first notice of loss (FNOL). However, as claims adjusters often manually assign case reserves based on their own experiences and static company guidelines, case reserves are frequently not as accurate as they could be, leading to inefficiencies that add up to high loss adjustment expenses.

Intelligent Solution

AI improves the accuracy of predicting a claim’s ultimate payout by learning complex patterns in claims and policy data. This allows claims departments to improve the efficiency of handling and triaging claims. Based on their predicted severity, claims can be routed to junior claims adjusters, senior claims adjusters, or straight-through processing, where they are processed automatically. Depending on the book of business, the workers’ compensation line can see up to 60% of claims go through straight-through processing. In addition, by allowing claims adjusters to focus on more severe and complex claims, insurers can reduce the number of claims that go into litigation and reduce ultimate claim severity. AI also provides claims adjusters with the statistical reasons behind each claim’s predicted payout, allowing them to make well-informed decisions on how to respond. Improving the allocation of claims when they are received helps insurers drastically reduce their loss adjustment expenses.

Technical Implementation

About the Data

For illustrative purposes, in this tutorial we are going to use a synthetic dataset of workers’ compensation claims. Although synthetic, this dataset follows the patterns that an actuary would expect in the real world, and it contains both structured features and unstructured text. A snapshot of the data can be found below.

Problem Framing

The target variable for this use case is Incurred, the total payout for a particular closed claim. This is a continuous, non-negative target.

Below are some commonly available features for workers’ compensation claims, many of which are available at FNOL. One key consideration is to make sure that any attributes you use to build the model will be available at prediction time; failing to do so will introduce target leakage. For example, using the number of dependent children recorded at a future date will introduce severe target leakage. This information will typically not be known at FNOL and will usually only become available later, and only for severe claims (because the insurer typically never needs this information for routine claims).

Therefore, including the number of dependent children that you know to be true today will lead to a model that predicts a low severity when that field is missing and a high severity when it is not. You will then score that model at FNOL, when that field is always missing, and you can see the problem: you will end up with predictions that are too low in aggregate. (For more information about target leakage, see this article.)
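
One simple safeguard is to restrict the modeling dataset to fields captured at FNOL before uploading it. Below is a minimal pandas sketch, assuming hypothetical file and column names that mirror the sample feature list that follows:

```python
import pandas as pd

# Hypothetical column names; each value must be the one recorded at FNOL,
# not a value backfilled later in the life of the claim.
FNOL_FEATURES = [
    "ReportingDelay", "AccidentHour", "Age", "WeeklyRate", "Gender",
    "MaritalStatus", "HoursWorkedPerWeek", "DependentChildren",
    "DependentsOther", "PartTimeFullTime", "DaysWorkedPerWeek",
    "DateOfAccident", "ClaimDescription", "ReportedDay", "InitialCaseEstimate",
]
TARGET = "Incurred"

claims = pd.read_csv("closed_claims.csv")

# Keep only the FNOL snapshot plus the target; anything updated after FNOL
# (e.g., litigation flags or later reserve changes) is excluded to avoid leakage.
modeling_data = claims[FNOL_FEATURES + [TARGET]]
modeling_data.to_csv("fnol_modeling_data.csv", index=False)
```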

Sample Feature List
Feature Name        | Data Type   | Description                                                              | Data Source
ReportingDelay      | Numeric     | Number of days between the accident date and the report date            | Claims
AccidentHour        | Numeric     | Time of day that the accident occurred                                   | Claims
Age                 | Numeric     | Age of the claimant                                                      | Claims
WeeklyRate          | Numeric     | Weekly salary                                                            | Claims
Gender              | Categorical | Gender of the claimant                                                   | Claims
MaritalStatus       | Categorical | Whether the claimant is married or not                                   | Claims
HoursWorkedPerWeek  | Numeric     | Usual number of hours worked per week by the claimant                    | Claims
DependentChildren   | Numeric     | Claimant’s number of dependent children                                  | Claims
DependentsOther     | Numeric     | Claimant’s number of dependents who are not children                     | Claims
PartTimeFullTime    | Categorical | Whether the claimant works part time or full time                        | Claims
DaysWorkedPerWeek   | Numeric     | Number of days per week worked by the claimant                           | Claims
DateOfAccident      | Date        | Date that the accident occurred                                          | Claims
ClaimDescription    | Text        | Text description of the accident and injury                              | Claims
ReportedDay         | Numeric     | Day of the week that the claim was reported to the insurer               | Claims
InitialCaseEstimate | Numeric     | Initial case estimate set by claim staff                                 | Claims
Incurred            | Numeric     | Target: final cost of the claim, i.e., all payments made by the insurer  | Claims
Data Preparation 

The example data is organized at the claim level: each row is a claim record, with all claim attributes taken at first notice of loss. The target variable, Incurred, is the total payment for a claim once it is closed, so the dataset contains only closed claims.

A workers’ compensation insurance carrier’s claim database is usually stored at the transaction level; that is, a new record is created for each change to a claim, such as partial claim payments and reserve changes. For the purpose of this project, we need to take a snapshot of the claim (and all of its attributes) when it is first reported, and then capture the total payment once the claim is closed (the target). Policy-level information can be predictive as well, such as the class, industry, job description, employee tenure, size of the employer, and whether there is a return-to-work program. Policy attributes should be joined with the claims data to form the modeling dataset, as sketched below.
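
A minimal pandas sketch of that reshaping, with hypothetical table and column names (a transaction file keyed on ClaimID with a TransactionDate, Payment, and ClaimStatus, plus a policy table keyed on PolicyID):

```python
import pandas as pd

transactions = pd.read_csv("claim_transactions.csv", parse_dates=["TransactionDate"])
policies = pd.read_csv("policies.csv")

# FNOL snapshot: the earliest transaction per claim carries the attributes
# known when the claim was first reported.
fnol_snapshot = (
    transactions.sort_values("TransactionDate")
    .groupby("ClaimID", as_index=False)
    .first()
)

# Keep only claims whose latest status is Closed, so the target is final.
latest_status = (
    transactions.sort_values("TransactionDate")
    .groupby("ClaimID", as_index=False)
    .last()
)
closed_ids = latest_status.loc[latest_status["ClaimStatus"] == "Closed", "ClaimID"]

# Target: total payments made over the life of each closed claim.
incurred = (
    transactions[transactions["ClaimID"].isin(closed_ids)]
    .groupby("ClaimID", as_index=False)["Payment"]
    .sum()
    .rename(columns={"Payment": "Incurred"})
)

# One row per claim: FNOL attributes + policy attributes + target.
modeling_data = (
    fnol_snapshot.merge(incurred, on="ClaimID")
    .merge(policies, on="PolicyID", how="left")
)
```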

Model Training

The modeling data can be uploaded to DataRobot’s platform in multiple ways. (See this community article for more information.)

Once the modeling data is uploaded to DataRobot, exploratory data analysis (EDA) will run automatically to produce a brief summary of the data, including descriptions of feature types, summary statistics for numeric features, and the distribution of each feature. Data quality is checked at this step as well; the guardrails that DataRobot has in place help ensure that only appropriate data is used in the modeling process. Furthermore, DataRobot automates other steps in the modeling process, such as partitioning the dataset: it will set aside 20% of the data as a holdout set, which is one of the guardrails in place to avoid overfitting.

For this particular use case, we will use the default settings suggested by DataRobot. DataRobot AutoML automates many parts of the modeling pipeline, though you are free to tune many of the details. Instead of having to hand-code and manually test dozens of models to find the one that best fits your data and needs, DataRobot automatically runs dozens of models and identifies the most accurate one.
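
If you prefer to drive this from code rather than the UI, the same workflow can be run through the DataRobot Python client. A minimal sketch, assuming the datarobot package is installed; the endpoint and token are placeholders, and exact method names can vary between client versions:

```python
import datarobot as dr

# Placeholder credentials; replace with your own endpoint and API token.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Create a project from the prepared FNOL dataset and run Autopilot
# against the continuous target with the default settings.
project = dr.Project.create("fnol_modeling_data.csv",
                            project_name="Claims Severity at FNOL")
project.set_target(target="Incurred", worker_count=-1)
project.wait_for_autopilot()

# The Leaderboard, ranked by the validation metric; take the top model.
best_model = project.get_models()[0]
print(best_model)
```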

We will jump straight to interpreting the model results. Take a look here to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.

Interpret Results

Feature Impact

Once a model is built, it is helpful to know which features are the key drivers of its predictions. Feature Impact ranks features by importance, from most to least important, and shows their relative importance. In the example below, we can see that ClaimDescription is the most important feature for this model, followed by WeeklyRate, Age, HoursWorkedPerWeek, and so on.
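
Feature Impact can also be retrieved programmatically, for example to feed model documentation. A short sketch continuing from the project created above (method names are from the datarobot Python client and may differ by version):

```python
# Compute Feature Impact if needed, otherwise fetch the existing results.
feature_impact = best_model.get_or_request_feature_impact()

# Each entry is a dict with the feature name and its normalized impact.
for fi in sorted(feature_impact, key=lambda x: x["impactNormalized"], reverse=True)[:10]:
    print(f"{fi['featureName']}: {fi['impactNormalized']:.2f}")
```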

Partial Dependence Plot

Now that we know which features are important to the model, we would also like to know how each feature affects the predictions. This is what the Feature Effects plot tells us. In the following image, we see the partial dependence for the WeeklyRate variable: in general, claimants with lower weekly pay have lower claim severity, while claimants with higher weekly pay have higher claim severity.

Prediction Explanations

People like explanations. When a claims adjuster sees a low prediction for a claim, their first question is likely to be, “What are the drivers for such a low prediction?” Insights provided at the individual prediction level can not only help claims adjusters understand how a prediction was made, but also increase confidence in the model. By default, DataRobot provides the top 3 Prediction Explanations for each prediction, but you can request up to 10 explanations. Model predictions and explanations can be downloaded as a CSV file, and you can control which predictions are populated in the downloaded file by specifying thresholds for high and low predictions. The graph below shows the top 3 explanations for the 3 highest and 3 lowest predictions: in general, high predictions are associated with older claimants and higher weekly salaries, while low predictions are associated with lower weekly salaries.
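
Prediction Explanations can likewise be requested through the Python client rather than the UI. A hedged sketch, continuing from the project above; class and method names follow the datarobot package, but the exact workflow may differ by client version:

```python
# Prediction Explanations require Feature Impact plus an initialization step.
init_job = dr.PredictionExplanationsInitialization.create(project.id, best_model.id)
init_job.wait_for_completion()

# Score a batch of new FNOL claims, then request the top 3 explanations per row.
dataset = project.upload_dataset("new_fnol_claims.csv")
best_model.request_predictions(dataset.id).wait_for_completion()
pe_job = dr.PredictionExplanations.create(
    project.id, best_model.id, dataset.id, max_explanations=3
)
explanations = pe_job.get_result_when_complete()
print(explanations.get_all_as_dataframe().head())
```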

Text Analytics

ClaimDescription is an unstructured text field. DataRobot builds text mining models on textual features, and the output from those text mining models is automatically used as input to subsequent modeling processes. Below is a Word Cloud for ClaimDescription, which shows the keywords parsed out by DataRobot. The size of a word indicates how frequently it appears in the data: strain appears very often, while fractured does not appear as often. Color indicates severity: both strain and fractured (red) are associated with high-severity claims, while finger and eye (blue) are associated with low-severity claims.

DataRobot can transform and model textual data in sophisticated ways. There are hundreds of blueprints that transform and manipulate text data differently; TF-IDF and other transformers are applied to the raw text and fed to sophisticated models. DataRobot is also multilingual, automatically identifying the language of text data and supporting different text mining algorithms depending on the language it detects. Feature engineering free-text data in the traditional way is notoriously complex and difficult, and data scientists often avoid doing it manually. DataRobot automatically finds, tunes, and interprets the best text mining algorithms for a dataset, saving both time and resources.
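
For a sense of what is being automated, below is a minimal scikit-learn sketch (not DataRobot code) of the traditional manual approach: TF-IDF features from the claim description feeding a simple ridge regression. The file and column names are the hypothetical ones used earlier:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

claims = pd.read_csv("fnol_modeling_data.csv")

# Hand-rolled baseline: TF-IDF of the claim description feeding a ridge regression.
text_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    Ridge(alpha=1.0),
)
text_model.fit(claims["ClaimDescription"], claims["Incurred"])

# Inspect which terms push predicted severity up or down.
vec = text_model.named_steps["tfidfvectorizer"]
coefs = text_model.named_steps["ridge"].coef_
terms = sorted(zip(vec.get_feature_names_out(), coefs), key=lambda t: t[1])
print("Lowest-severity terms:", terms[:5])
print("Highest-severity terms:", terms[-5:])
```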

Evaluate Accuracy 

The Lift Chart shows how effective the model is at differentiating the lowest risks (on the left) from the highest risks (on the right), and the fact that the actual values (orange curve) closely track the predicted values (blue curve) tells us that the model fits the data well.

Post-Processing

A prediction for claim severity can be used for multiple different applications, each requiring different post-processing steps. Primary insurers may use the model predictions for claim triage, initial case reserve determination, or reinsurance reporting. For example, for claim triage at FNOL, the model prediction can be used to determine where the claim should be routed. A workers’ compensation carrier may decide that all claims with predicted severity under $5,000 go to straight-through processing (STP); claims between $5,000 and $20,000 go through the standard process; claims over $20,000 are assigned a nurse case manager; and claims over $500,000 are also reported to a reinsurer, if applicable. Another carrier may decide to route 40% of claims to STP, 55% to the regular process, and 5% to a nurse case manager, and set the thresholds accordingly. These thresholds can be programmed into the business process so that claims go through the predesigned pipeline once reported and are routed appropriately. Note that companies with STP should carefully design their claim monitoring procedures to ensure unexpected claim activities are captured.
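
Wiring such thresholds into the pipeline is straightforward once they are chosen. A minimal sketch of the first carrier’s routing rule, with the dollar thresholds taken from the illustrative values above:

```python
def route_claim(predicted_severity: float) -> list[str]:
    """Map a predicted severity at FNOL to the claim-handling route(s)."""
    routes = []
    if predicted_severity < 5_000:
        routes.append("straight-through processing")
    elif predicted_severity <= 20_000:
        routes.append("standard process")
    else:
        routes.append("assign nurse case manager")
    if predicted_severity > 500_000:
        routes.append("report to reinsurer")
    return routes

print(route_claim(3_200))    # ['straight-through processing']
print(route_claim(750_000))  # ['assign nurse case manager', 'report to reinsurer']
```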

In order to test these different assumptions, single or multiple A/B tests can be designed and run in sequence or in parallel. A power analysis and significance level need to be set before the tests in order to determine the number of observations required before a test can be stopped. In designing the test, think carefully about the drivers of profitability: ideally you want to allocate resources based on the change they can effect, not just on the cost of the claim. For example, fatality claims are relatively costly but not complex, and so can often be assigned to a fairly junior claims handler. Finally, at the end of the A/B tests, the best combination can be identified based on the profit of each test.
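
A brief sketch of the power analysis step using statsmodels; the effect size, significance level, and power below are illustrative assumptions, not values from this use case:

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions: detect a small effect (Cohen's d = 0.2)
# at a 5% significance level with 80% power.
n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Claims required per A/B test arm: {n_per_arm:.0f}")  # roughly 394
```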

Business Implementation

Decision Environment 

DataRobot makes it easy to deploy a selected model into your desired decision environment, which is how you embed the predictions into your regular business decisions. Insurance companies often have a separate system for claims management. For this particular use case, it may be in the users’ best interest to integrate the model with the claims management system, and with visualization tools such as Power BI or Tableau. DataRobot provides the REST API, CodeGen, and DataRobot Prime as integration options.
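
As a sketch of the REST API option, a deployed model can be scored from the claims management system with a plain HTTP request. The URL pattern, headers, and response parsing below are placeholders and assumptions; the details depend on your DataRobot deployment and prediction server:

```python
import requests

# Placeholders: substitute your prediction server URL, deployment ID, and token.
URL = "https://example.datarobot.com/predApi/v1.0/deployments/DEPLOYMENT_ID/predictions"
HEADERS = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
}

# One new claim at FNOL, using the hypothetical feature names from earlier.
new_claim = [{
    "ReportingDelay": 2,
    "Age": 43,
    "WeeklyRate": 950,
    "ClaimDescription": "strained lower back lifting boxes",
}]

response = requests.post(URL, headers=HEADERS, json=new_claim)
response.raise_for_status()

# The exact response structure may vary; this assumes one regression prediction per row.
prediction = response.json()["data"][0]["prediction"]
print(f"Predicted severity: {prediction:,.0f}")
```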

Decision Maturity 

Automation | Augmentation | Blend

If the model is integrated within an insurer’s claims management system, then when a new claim is reported, FNOL staff can record all the available information in the system. The model can then run in the background to evaluate the ultimate severity. The estimated severity can help suggest initial case reserves and an appropriate route for further claim handling (i.e., STP, regular claim adjusting, or experienced claims adjusters, possibly with nurse case manager involvement and/or reinsurance reporting).

Carriers will want to include rules-based decisions as well, to capture decisions that are driven by considerations other than ultimate claim severity.

Model Deployment 

There are several ways the model can be deployed, depending on how ready it is to be put into production.

DataRobot Drag and Drop or REST API

Before the model is fully integrated into production, a pilot may be beneficial for:

  • Testing the model performance using new claims data.
  • Monitoring unexpected scenarios so a formal monitoring process can be designed or modified accordingly.
  • Increasing the end users’ confidence in using the model outputs to assist business decision making.

Connection to Other Systems

Once stakeholders across the insurer are comfortable with the model and the process, integrating the model with production systems can maximize its value. The outputs from the model can be customized to meet the needs of claims management.

Decision Stakeholders 
  • Claims management team
  • Claims adjusters
  • Reserving actuaries 
Decision Process

Most carriers do not set initial reserves for STP claims. For those claims beyond STP, model predictions can be used to set initial reserves at the first notice of loss. Claims adjusters and nurse case managers will only be involved for claims over certain thresholds. The reinsurance reporting process may benefit from the model predictions as well; instead of waiting for claims to develop to very high severity, the reporting process may start at FNOL. Reinsurers will certainly appreciate the timely reporting of high severity claims, which will further improve the relationship between primary carriers and reinsurers.

Model Monitoring 

Carriers implementing a claims severity model usually have strictly defined business rules to ensure abnormal activities are captured before they get out of control. Triggers based on abnormal behavior (for example, abnormally high predictions or too many missing inputs) should be set up so that manual reviews happen in a timely fashion. Regular reports may also be produced and distributed to different stakeholders.
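
The trigger logic itself can be simple rule checks applied to each scored claim. A minimal sketch, with the thresholds as illustrative assumptions:

```python
def needs_manual_review(prediction: float, inputs: dict,
                        high_prediction: float = 250_000,
                        max_missing_inputs: int = 3) -> bool:
    """Flag a scored claim for manual review on abnormal behavior."""
    missing = sum(1 for value in inputs.values() if value is None)
    return prediction > high_prediction or missing > max_missing_inputs

# Example: a modest prediction but too many missing inputs still gets flagged.
claim_inputs = {"Age": None, "WeeklyRate": None, "ClaimDescription": None, "Gender": None}
print(needs_manual_review(prediction=12_000, inputs=claim_inputs))  # True
```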

If a REST API is used to deploy the model, various metrics such as service health, data drift, and accuracy can all be monitored within the DataRobot platform.

Implementation Risks

A claims severity model at FNOL should be one of a series of models built to monitor claim severity over time. Besides the FNOL model, separate models should be built at different stages of a claim (e.g., 30 days, 90 days, 180 days, mature) to leverage the additional information available and further evaluate the claim severity. Information about medical treatments, diagnoses, and missed work comes in over time, allowing accuracy to improve as a claim matures.

