Business Rules Are Integral to Successful AI
The final trust dimension of AI operations is the assessment of when and how to integrate an AI system into your processes. A set of business rules and expectations should govern your implementation of AI and guide it to deliver the most value to your enterprise.
When Should I Use an AI Model?
You know your business best, and any AI model is first and foremost just another tool at your disposal. Whether AI is appropriate for a particular use case is a question that should be asked before any model is built. However, there is every reason to revisit it continually and reevaluate whether the current model and process remain appropriate.
For example, you might have a more complete picture of a developing situation than you can convey to the model. There might be certain events or time periods, such as around major holidays or the launch of new products, when the normal behavioral patterns the model is trained to predict will not hold true. As another example, this responsiveness can also be protective in the evolving digital and social media landscape. Language and meaning are in constant flux online. Platforms can adapt to these changes faster through manual adjustments, overriding the model under select conditions, than through retraining and redeploying a model.
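Such an override might be implemented as a thin business-rules layer around the model's output. The sketch below is illustrative only: the function name, the holiday windows, and the multiplier are all assumptions, standing in for whatever rules your domain knowledge dictates.

```python
from datetime import date

# Hypothetical holiday windows during which the model's learned patterns
# are known not to hold; dates here are made up for the sketch.
HOLIDAY_WINDOWS = [
    (date(2024, 11, 25), date(2024, 12, 2)),   # e.g., Black Friday week
    (date(2024, 12, 20), date(2025, 1, 2)),    # e.g., year-end holidays
]

def apply_business_rules(model_forecast: float, today: date,
                         holiday_multiplier: float = 1.5) -> float:
    """Adjust or override the model's forecast when a known condition
    (here, a major holiday window) invalidates its training patterns."""
    for start, end in HOLIDAY_WINDOWS:
        if start <= today <= end:
            return model_forecast * holiday_multiplier
    return model_forecast
```

Because the rule lives outside the model, it can be changed the same day a new condition is identified, without retraining or redeployment.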
How Should the Model’s Predictions Be Consumed?
There are three main paradigms for how a model’s predictions are consumed:
- Human-in-the-loop: The human operator makes the final decision, taking into account the recommendation or prediction of the AI system.
- Human-out-of-the-loop: The AI makes the final decision without the involvement of a human operator. This is full automation.
- Human-over(seeing)-the-loop: The human operator plays a supervisory role, with the ability to intercede when the AI encounters unexpected scenarios or performs unexpectedly.
The needs of the process likely dictate the most appropriate strategy for your model. When full automation is required, human-out-of-the-loop is the most feasible, though putting some humility triggers and fallback actions into production can assist in uncertain situations.
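A humility trigger in an otherwise automated pipeline can be as simple as a confidence threshold that routes uncertain cases out of the automated path. This is a minimal sketch; the function name, the 0.9 threshold, and the review-queue action are assumptions, not a prescribed implementation.

```python
def decide(prediction: str, confidence: float,
           threshold: float = 0.9) -> tuple[str, str]:
    """Return an (action, decision) pair. Confident predictions are
    acted on automatically; uncertain ones are escalated rather than
    letting the system act autonomously outside its comfort zone."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("review", "escalated to human operator")
```

The same pattern generalizes to human-over-the-loop designs, where the escalation path is a supervisory dashboard rather than a work queue.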
Besides the Prediction, What Else Should You Share for the Final Decision?
In addition to the prediction, you can choose to output information on the model’s confidence and factors that inform its reasoning to your user. As discussed in the section on humility, information on model confidence can assist a human decision-maker in weighing and interpreting a prediction.
Explainability tools can be of great assistance in building user trust in a system. Similar to the standard practice with a credit score, a model can output how the top features and their input values influenced the prediction. Seeing those values confirmed, and getting some insight into the reasoning behind how the model used them, can go a long way to establishing trust in the prediction itself.
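For a linear model, such credit-report-style "reason codes" can be produced by ranking features on the magnitude of their contribution (weight times input value) to a single prediction. The feature names, weights, and inputs below are invented for the sketch, and real deployments often use dedicated explainability tooling instead.

```python
def top_factors(weights: dict[str, float],
                inputs: dict[str, float],
                k: int = 3) -> list[tuple[str, float]]:
    """Return the k features with the largest absolute contribution
    (weight * value) to this prediction, most influential first."""
    contributions = {f: weights[f] * inputs[f] for f in weights}
    return sorted(contributions.items(),
                  key=lambda kv: abs(kv[1]), reverse=True)[:k]

# Made-up model weights and one applicant's inputs, for illustration.
weights = {"utilization": -2.0, "payment_history": 1.5, "account_age": 0.3}
inputs = {"utilization": 0.8, "payment_history": 1.0, "account_age": 4.0}
print(top_factors(weights, inputs, k=2))
```

Surfacing these top factors alongside the prediction lets the user verify the input values and see, at a glance, why the model leaned the way it did.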