The AI Contribution to Decision-Making
A loan application has a predicted likelihood of 80% of going bad – so what? Your artificial intelligence (AI) system has given you this “predicted feature” in addition to what you already know about the applicant. It is one of many features that a human would use to decide whether to accept or decline the application. Business rules set by the credit committee to control business risk and shape the loan portfolio also constrain whether this application can be accepted and the loan advanced. If the predicted likelihood of default were just 50%, or just 20%, would the decision be different? Well, that might depend on the rules.
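To make this concrete, here is a minimal sketch of how a predicted default probability might combine with business rules in a loan decision. The thresholds, exposure cap, and function name are illustrative assumptions, not actual credit-committee policy:

```python
def loan_decision(default_probability, loan_amount, portfolio_exposure):
    """Combine an AI-predicted default probability with business rules.

    All thresholds here are hypothetical illustrations.
    """
    # Business rule: the committee caps total exposure regardless of risk score.
    if portfolio_exposure + loan_amount > 10_000_000:
        return "decline: portfolio exposure limit reached"
    # The prediction is one feature among several, not the whole decision.
    if default_probability >= 0.80:
        return "decline: predicted default risk too high"
    if default_probability >= 0.50:
        return "refer: manual underwriting review"
    return "approve"
```

Note how the same 20% prediction can still lead to a decline if a portfolio-level rule fires first: the prediction contributes to the decision, but the rules frame it.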
In this instance AI is a contributor to the decisions we take because of the prediction it makes. In many instances, AI makes explicit one or more features of the decision, which previously were estimated by a human using judgment and intuition.
So many of the potential use cases for AI are like this that we see a growing number of organizations now looking at how to use AI from a different angle. They are confident that predictions can be made with some degree of accuracy and confidence, but the real question now is “so what?”
What Decisions Can Change – and How?
Rather than waiting until a prediction has been made, savvy business people are not asking which machine learning (ML) techniques might result in an interesting prediction. Instead, they are turning their minds to the question “What do I need to know to change the way we make decisions?”
Most successful ML and AI projects are replacing an existing prediction with a more accurate and/or faster one. Human intuition is relied on, or an older, hand-built BI/predictive model exists, or perhaps a set of heuristics was documented that more or less predicts something. Replacing these with a more accurate (and rational) AI/ML model in an existing system or process is generally straightforward, because there is a context and environment in which the new model can succeed.
Far fewer AI models succeed in creating business value when there is no existing prediction to replace. Too often the new model fails the “so what” test. The prediction is accurate, insightful, and interesting but not, in the end, useful, because no one can determine how to use the prediction to materially change the behavior of the organization in a way that creates business value.
As an AI adopter, the decisions that are important to your business are already being made today. But they may well be made using rules of thumb and intuition (or guesswork) that could be improved (in frequency, speed, accuracy, consistency, etc.) using an AI model. Business rules (such as those set for the approval of loans) incorporate many factors to guide decision-making systems or the human decision makers to whom decisions are delegated. For a predicted outcome – an AI model – to become one of the decision factors, it will be essential to determine the part it plays in the overall decision. Understanding this is a crucial step to monetizing the value of AI in organizations. To realize value, you must understand the impact of the AI model’s prediction on decisions and be able to determine what will change as a result of having the model. The best time to do this is before you have an AI model: begin with the “so what” and work backwards to the AI model you need.
Macro and Micro Analysis
Many AI/ML projects start with macro analysis. They look at, say, how often delivery dates were hit or missed, and the characteristics of the projects or orders involved. They try to identify the factors that drive missed delivery dates and see if they can predict on-time and late deliveries. If this macro analysis succeeds, the team may be done – predicting it may be enough or the nature of the predictive model may make it clear that macro changes in scheduling and commitment can be made to ensure on-time delivery.
But often the changes that will need to be made are more granular. Micro decisions in the processes involved will need to be made differently for each project or order so that it gets delivered on time. There is no one solution that works for all of them. When that happens, the data science team needs to move forward with a Phase 2.
Consider an example: the choice of how much of each type of scrap to load into a furnace to melt down and achieve a target chemistry was previously made once a month using a hand-crafted set of spreadsheets that frequently “broke” and had to be repeatedly fixed. By focusing on the decision – the mix of scrap to load for each furnace “heat” – and on closed-loop learning from the final chemistry of each heat, decision making was automated, with greater accuracy and frequency, using a couple of ML models that predicted incoming scrap chemistry and then optimized the outcome chemistry against scrap input cost. Micro decisions are now made for each individual “heat” (every 35 minutes) about the amount of each of seven types of scrap to load into the furnace, achieving the target metal chemistry at the lowest cost and saving millions in input costs each year.
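The per-heat micro decision described above can be sketched as a small optimization: given each scrap type's predicted chemistry (the output of the first ML model) and its price, find the cheapest blend that keeps the melt within the target chemistry. The scrap names, chemistry values, prices, and brute-force search below are invented for illustration; a real system would use many more constraints and a proper optimizer:

```python
SCRAP = {
    # type: (copper fraction predicted by the chemistry model, cost per tonne)
    # Values are hypothetical, for illustration only.
    "shredded":  (0.30, 120.0),
    "busheling": (0.05, 200.0),
    "pig_iron":  (0.01, 260.0),
}

def blend_chemistry(mix):
    """Weighted copper fraction of a blend; mix maps scrap type -> tonnes."""
    total = sum(mix.values())
    return sum(SCRAP[t][0] * w for t, w in mix.items()) / total

def blend_cost(mix):
    return sum(SCRAP[t][1] * w for t, w in mix.items())

def cheapest_blend(target_cu, heat_tonnes=100, step=5):
    """Brute-force search (a stand-in for a real optimizer) over blends
    that keep copper at or below the target fraction."""
    best = None
    for a in range(0, heat_tonnes + 1, step):
        for b in range(0, heat_tonnes - a + 1, step):
            mix = {"shredded": a, "busheling": b,
                   "pig_iron": heat_tonnes - a - b}
            if blend_chemistry(mix) <= target_cu:
                if best is None or blend_cost(mix) < blend_cost(best):
                    best = mix
    return best
```

Running `cheapest_blend(0.10)` trades off the cheap, high-copper scrap against cleaner, more expensive material – exactly the micro decision that was previously buried in monthly spreadsheets.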
The best way to manage a Phase 2 is to work backwards. Start by identifying the decisions that could be made differently and the outcomes that could be improved. Modeling these decisions to create a business blueprint of the current decision-making approach is key, as it builds a shared understanding between those who must make changes (business and IT) and those doing the analysis (data science).
These blueprints will encapsulate the knowledge, expertise, regulations, and policies that constrain and guide the decision-making. Most of these will have to be applied even with a new AI/ML model so integrating the prediction with business rules or decision logic that represents these accurately and transparently is key.
This decision modeling makes explicit one or more ways AI models can improve the decision making – namely, something that materially changes the decision. The model might predict something new that can be used to make a better decision, such as the most profitable retention offer (or the one with the highest likelihood of acceptance) for clients predicted to churn in the next 90 days. It might be a prediction that replaces human pattern matching (expert decisions), such as recognizing previous fraudulent claims patterns or the rate and speed of an advancing bushfire. Or it might be a prediction that enables differentiation that wasn’t possible before, such as differentiated pricing based on predicted delivery date.
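The retention example above can be sketched as explicit decision logic: one model predicts churn, a second predicts offer acceptance, and a rule picks the offer with the highest expected value. The churn threshold, offer list, and margins below are hypothetical:

```python
def best_retention_offer(churn_probability, offers, threshold=0.5):
    """Pick the retention offer with the highest expected value.

    offers: list of (name, accept_probability, retained_margin, cost).
    Returns None when the customer is not predicted to churn.
    All numbers and the 0.5 threshold are illustrative assumptions.
    """
    if churn_probability < threshold:
        return None  # no intervention needed

    def expected_value(offer):
        name, p_accept, margin, cost = offer
        return p_accept * margin - cost

    return max(offers, key=expected_value)[0]

# Hypothetical offers: (name, predicted acceptance, retained margin, cost)
offers = [("10% discount", 0.6, 500, 50),
          ("free month",   0.4, 500, 80)]
```

Here `best_retention_offer(0.9, offers)` would recommend the discount, while a low churn score yields no offer at all – the predictions only matter through the decision they drive.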
The result of modeling the whole decision (rather than focusing only on making a prediction) is that it generates an explicit set of rules that will assimilate and evaluate decision inputs and features. It provides a common language between business analysts, architects, business owners, IT professionals, and AI/ML teams. Decisions are more easily tied to performance measures and to overall business goals, making it easier to focus teams where they will have the highest impact and to measure results.
With a shared understanding, the data science team can find out what kinds of predictions – with what time horizon and what accuracy – would allow these micro decisions to be made differently. Now that they know what the “so what” is, they can see if they can build the AI/ML model that is needed, confident that they understand what it is going to take to succeed. These models will likely be different from the initial ones, though they will probably share approaches, features, and datasets.
Once the data science team has working AI models that improve one or more of these micro-decisions, a new decision model can be built showing how the analytically improved decision will be made. This go-forward blueprint can be implemented – either as guidelines for human decision-makers, or the specification of an automated decision service.
Drive Through to Implementation
For human decision-makers, decision models can be used to ensure that the business intelligence environment presents data and predictions in ways that mesh with the defined decision-making approach. For automated decision-making, decision models show what rules need to be automated – both those tightly coupled to the prediction that should be integrated into its deployment and those handling broader regulations and policies that need to be managed on their own change cycle. The explicit management of both ensures compliance (especially when transparent and explainable AI models are used) and the business ownership necessary to create business value.
Such an approach also provides a framework for continuous improvement. With a clear definition of the decision-making approach, the decisions made can be logged. What predictions did we make, what were the key factors in those predictions, and how were those predictions combined with regulatory and policy constraints to come up with a profitable, legal, and effective decision? This reduces concerns about using AI models in production, while also supporting continuous improvement – the ongoing review of how decisions were made and how they worked out in business terms – so that the rules, the AI models, and the overall decision model can be improved and optimized over time.
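A decision log of the kind described above needs little more than a consistent record per decision: the prediction, its key factors, the rules that fired, and the action taken. The field names and example values below are illustrative:

```python
import json
from datetime import datetime, timezone

def decision_record(prediction, top_factors, rules_fired, decision):
    """One auditable entry in a decision log. Field names are illustrative."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,      # e.g. predicted default probability
        "top_factors": top_factors,    # model explanation features
        "rules_fired": rules_fired,    # policy/regulatory rules applied
        "decision": decision,          # the final action taken
    }

log = []
log.append(decision_record(0.82,
                           ["debt_to_income", "recent_delinquency"],
                           ["exposure_cap_ok", "risk_threshold_exceeded"],
                           "decline"))
# Records serialize cleanly for audit storage and later review.
audit_line = json.dumps(log[-1])
```

Replaying such records against eventual business outcomes is what lets the rules, the models, and the overall decision model be tuned together.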
AI models, especially transparent and explainable AI models, are potentially transformative. For many larger organizations, the constraints of regulations, policies, contracts, long-standing business relationships, brand permission, and much more can seem like impassable barriers to this transformation. But a focus on decisions, on the “so what” of AI models, can break down those barriers and allow AI models to be deployed and integrated into business operations where they can create real business value.
DataRobot is the leader in Value-Driven AI – a unique and collaborative approach to AI that combines our open AI platform, deep AI expertise, and broad use-case implementation to improve how customers run, grow, and optimize their business. The DataRobot AI Platform is the only complete AI lifecycle platform that interoperates with your existing investments in data, applications, and business processes, and can be deployed on-prem or in any cloud environment. DataRobot and our partners have a decade of world-class AI expertise collaborating with AI teams (data scientists, business, and IT), removing common blockers and developing best practices to successfully navigate projects that result in faster time to value, increased revenue, and reduced costs. DataRobot customers include 40% of the Fortune 50, 8 of the top 10 US banks, 7 of the top 10 pharmaceutical companies, 7 of the top 10 telcos, and 5 of the top 10 global manufacturers.