Humility in AI: Building Trustworthy and Ethical AI Systems
At DataRobot, we believe humility is a key step in building more trustworthy AI models. What do we mean by humility in AI? We mean that even a model built according to data science best practices can have weaknesses or vulnerabilities when confronted with new data to score: that data may have quality issues, or it may fall in a region where the model has low confidence. These vulnerabilities and uncertainties in a model’s predictions can be defined mathematically, and humility then enables the user of an AI system to augment the model’s machine intelligence with human expertise and guidance.
No model is going to be perfect. But with a little humility and the injection of human intelligence, a model’s predictions, and the decisions that follow from them, can stay aligned with the needs of your business process even in tricky situations. The end goal of enterprise AI is not realized by prediction accuracy alone, but by delivering the best data-to-value from your AI system.
In June, DataRobot introduced Humble AI, a new feature that protects the quality of your predictions in situations where the model may be less confident. With Humble AI, users create rules for deployed models that make real-time predictions. These rules identify conditions that quantifiably indicate a prediction may be uncertain, and can then trigger actions such as defaulting to a “safe” prediction, overriding outlier values, or declining to make a prediction at all.
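To make the idea concrete, here is a minimal sketch of how humility rules like these could be expressed in code. This is an illustration of the pattern, not the DataRobot API: the rule names, thresholds, and the toy model are all hypothetical, and each rule simply pairs a trigger condition with one of the three actions described above.

```python
# Hypothetical sketch of "humility rules": each rule pairs a trigger (a
# condition on the input or the prediction) with an action taken when the
# model may be less confident. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HumilityRule:
    name: str
    # (features, prediction) -> does this rule fire?
    trigger: Callable[[dict, float], bool]
    # (features, prediction) -> adjusted prediction, or None to abstain
    action: Callable[[dict, float], Optional[float]]

def predict_with_humility(model, features: dict,
                          rules: list) -> Optional[float]:
    """Score with the model, then apply humility rules in order;
    the first rule whose trigger fires determines the outcome."""
    prediction = model(features)
    for rule in rules:
        if rule.trigger(features, prediction):
            return rule.action(features, prediction)
    return prediction

SAFE_DEFAULT = 0.5  # illustrative "safe" fallback value

rules = [
    # Make no prediction at all when a required feature is missing
    # (a data-quality issue).
    HumilityRule(
        "no-prediction-on-missing-data",
        trigger=lambda f, p: f.get("income") is None,
        action=lambda f, p: None,
    ),
    # Override outlier values by clipping the prediction to a valid range.
    HumilityRule(
        "override-outlier-values",
        trigger=lambda f, p: not (0.0 <= p <= 1.0),
        action=lambda f, p: min(max(p, 0.0), 1.0),
    ),
    # Default to a "safe" prediction in a low-confidence region
    # near the decision boundary.
    HumilityRule(
        "default-to-safe-prediction",
        trigger=lambda f, p: abs(p - 0.5) < 0.05,
        action=lambda f, p: SAFE_DEFAULT,
    ),
]

# A toy scoring model standing in for a deployed model.
model = lambda f: 0.9 if (f.get("income") or 0) > 50_000 else 0.48

print(predict_with_humility(model, {"income": 80_000}, rules))  # 0.9 (confident)
print(predict_with_humility(model, {"income": 20_000}, rules))  # 0.5 (safe default)
print(predict_with_humility(model, {"income": None}, rules))    # None (abstains)
```

Evaluating rules in a fixed order makes the system auditable: every overridden or withheld prediction can be traced back to the named rule that fired, which is the kind of feedback an AI decision system can learn from.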
Humility in AI describes in depth how the practice of humility in AI can not only protect your enterprise decision-making in real time, but also provide feedback on your overall process, identifying integrity issues, blind spots, and changing requirements that can inform iterative improvements to the AI decision system.