Humility in AI: Building Trustworthy and Ethical AI Systems
At DataRobot, we believe humility is a key step toward building more trustworthy AI models. What do we mean by humility in AI? We mean that even a model built according to data science best practices may still have areas of weakness or vulnerability when confronted with new data to score. For example, that data may have quality issues, or it may produce a prediction in a region where the model has low confidence. These vulnerabilities or uncertainties in a model's predictions can be defined mathematically, and that humility then enables the user of an AI system to augment the model's machine intelligence with human expertise and guidance.
No model is going to be perfect. But with a little humility and the injection of human intelligence, a model's predictions and the decisions that follow from them can stay aligned with the needs of your business process, even in tricky situations. The end goal of enterprise AI is not just accurate predictions, but ensuring your AI system reliably turns your data into value.
In June, DataRobot introduced Humble AI, a new feature in DataRobot that protects the quality of your predictions in situations where the model may be less confident. With Humble AI, users create rules for deployed models that make real-time predictions. These rules identify conditions that quantifiably indicate an uncertain prediction and can then trigger actions such as defaulting to a "safe" prediction, overriding outlier values, or declining to make a prediction at all.
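To make the rule mechanism concrete, here is a minimal sketch of the trigger-plus-action pattern described above. This is an illustrative example, not DataRobot's API: the class and function names, thresholds, and rules are all hypothetical, and a real deployment would define triggers from measured model uncertainty rather than these toy conditions.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class HumilityRule:
    """A hypothetical humility rule: a trigger condition paired with an action."""
    trigger: Callable[[float], bool]             # does this prediction look uncertain?
    action: Callable[[float], Optional[float]]   # adjusted value, or None to abstain

def apply_rules(prediction: float, rules: List[HumilityRule]) -> Optional[float]:
    """Return the first triggered rule's result, or the raw prediction unchanged."""
    for rule in rules:
        if rule.trigger(prediction):
            return rule.action(prediction)
    return prediction

# Illustrative rules for a binary classifier's predicted probability:
low_confidence = HumilityRule(
    trigger=lambda p: 0.4 < p < 0.6,   # near the decision boundary
    action=lambda p: 0.0,              # default to a "safe" negative prediction
)
outlier_override = HumilityRule(
    trigger=lambda p: p < 0.0 or p > 1.0,          # value outside the valid range
    action=lambda p: min(max(p, 0.0), 1.0),        # override the outlier value
)
abstain = HumilityRule(
    trigger=lambda p: p != p,          # NaN input: the model produced no usable score
    action=lambda p: None,             # make no prediction at all
)
```

In this sketch, a confident in-range prediction such as 0.9 passes through untouched, while a borderline score like 0.5 is replaced by the safe default, mirroring the three actions the paragraph above describes.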
Humility in AI describes in depth the ways that practicing humility in AI can not only protect your enterprise decision-making in real time, but also provide feedback on your overall process, identifying integrity issues, blind spots, and changing requirements that can inform iterative improvements to the AI decision system.
Sarah is an Applied Data Scientist on the Trusted AI team at DataRobot. Her work focuses on the ethical use of AI, particularly the creation of tools, frameworks, and approaches to support responsible but pragmatic AI stewardship, and the advancement of thought leadership and education on AI ethics.