Delivering Trusted AI with Humility Rules
This post was originally part of the DataRobot Community.
The Humility tab lets you configure rules that enable models to recognize, in real time, when they make uncertain predictions or receive data they haven’t seen before. For these cases, you can specify the action to take for individual predictions, giving you control over model behavior through rules tied to different triggers.
You use humility rules to associate trigger conditions with corresponding actions. Using humility rules helps to mitigate risk for model predictions in production.
You create humility rules for a deployment from the Humility > Rules tab. If you have not yet enabled humility for the model, toggle on Enable humility. Then create a new rule by selecting Add Rule. Alternatively, select Use Recommended Rules to generate two automatically configured humility rules: one triggers on uncertain predictions, and the other handles outliers for the most important numeric feature.
Clicking Add Rule brings up options for configuring the new rule. Click the pencil icon to give the new rule a name. Directly below are two dropdown lists: one to select the trigger that detects a rule violation and one to select the action to apply to a prediction when that trigger fires.
Creating a New Humility Rule
There are three triggers available, and each trigger has additional values to configure how it detects violations.
Uncertain Prediction detects prediction values relative to defined thresholds. You set the lower-bound and upper-bound thresholds for prediction values, either by entering them manually or by clicking Calculate to use values computed from the holdout set. For regression models, the trigger detects any prediction value outside the configured thresholds; for binary classification models, it detects any prediction probability inside the thresholds.
Outlying Input detects whether the input value of a numeric feature is outside the configured thresholds. To configure this trigger, select a numeric feature and set the lower-bound and upper-bound thresholds for its input values. Enter the values manually or click Calculate to use thresholds computed from the model’s training data (computed thresholds are available only for models built within DataRobot).
Low Observation Region detects if the input value of a categorical feature is not included in the list of specified values. For this trigger, select a categorical feature and enter one or more values. Any input value that appears in prediction requests but does not match the indicated values triggers the corresponding action.
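The detection logic behind the three triggers can be sketched in plain Python. This is an illustrative model of the behavior described above, not DataRobot’s internal implementation; all function names and threshold values are hypothetical.

```python
# Hypothetical sketches of how each humility trigger flags a single
# prediction row. Thresholds and allowed values are examples only.

def uncertain_prediction(probability, lower=0.25, upper=0.75):
    """Binary classification: flag probabilities *inside* the thresholds."""
    return lower <= probability <= upper

def outlying_input(value, lower, upper):
    """Flag numeric feature values *outside* the configured thresholds."""
    return value < lower or value > upper

def low_observation_region(value, allowed_values):
    """Flag categorical values not in the list of specified values."""
    return value not in allowed_values

print(uncertain_prediction(0.55))                      # True: inside 0.25-0.75
print(outlying_input(120.0, lower=0.0, upper=100.0))   # True: above upper bound
print(low_observation_region("XL", {"S", "M", "L"}))   # True: unseen category
```

Note the asymmetry called out above: for classification, uncertainty lies *between* the thresholds (near the decision boundary), while for numeric inputs the violation lies *outside* them.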
After you select any trigger, choose a corresponding action to apply for a rule violation. There are three actions available; each can be used with any of the three triggers.
No operation results in no changes being made to the detected prediction value.
Override prediction replaces the predicted value for rows violating the trigger with the value configured for the action. For binary classification and multiclass models, the indicated value can be either of the model’s class labels (e.g., “True” or “False”). For regression models, manually enter a value or use the maximum, minimum, or mean provided by DataRobot (available only for models built within DataRobot).
Throw error returns a 480 HTTP error for any row that violates the trigger; the error and its message appear as part of the prediction response. This also contributes to the data error rate on the Service Health tab. You can use the default error message or specify a custom one.
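A trigger-plus-action rule can be sketched as a small dispatch function. This is a hypothetical illustration of the three actions described above (not DataRobot code); `HumilityError` stands in for the 480 HTTP error.

```python
# Illustrative sketch of applying one of the three actions once a trigger
# has fired. Names are hypothetical, not part of any DataRobot API.

class HumilityError(Exception):
    """Stands in for the 480 HTTP error returned by the Throw error action."""

def apply_action(prediction, triggered, action, override_value=None,
                 error_message="Prediction violated a humility rule"):
    if not triggered or action == "no_operation":
        return prediction          # No operation: prediction passes through
    if action == "override_prediction":
        return override_value      # Override: replace with configured value
    if action == "throw_error":
        raise HumilityError(error_message)
    raise ValueError(f"unknown action: {action}")

# Override an uncertain probability with a class label:
print(apply_action(0.55, triggered=True,
                   action="override_prediction", override_value="False"))
```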
When you’ve finished configuring the rule, click Add to save it.
When you’re set with all humility rule changes for the deployment, click Submit; if you navigate away from the tab before submitting the rule changes, they will not be saved.
Once the rules are saved and submitted, DataRobot monitors the deployment using the new rules along with any previously created ones. After a rule is created, the prediction response body includes a humility object.
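On the client side, you can inspect the humility information returned with each prediction row. The JSON shape below is an illustrative assumption, not the documented schema (check your deployment’s prediction API reference for the exact field names); the snippet parses a canned response rather than calling a live endpoint.

```python
# Sketch of reading humility results from a prediction response.
# The field names ("humility", "triggered", "rules") are assumptions
# made for illustration; consult the prediction API docs for the
# real response schema.
import json

response_body = json.loads("""
{
  "data": [
    {"rowId": 0, "prediction": 0.91,
     "humility": {"triggered": false, "rules": []}},
    {"rowId": 1, "prediction": 0.52,
     "humility": {"triggered": true, "rules": ["Uncertain Prediction"]}}
  ]
}
""")

# Collect the rows where at least one humility rule fired.
flagged = [row["rowId"] for row in response_body["data"]
           if row["humility"]["triggered"]]
print(flagged)
```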
Editing Humility Rules
To edit a rule, select the pencil icon next to the rule’s name. You can change the trigger, the action, and any associated values. When you are finished, click Save changes.
To delete a rule, select the trash can icon for that rule.
To reorder the rules, drag and drop them into the desired order.
After making any of the changes described above, click Submit to finalize your edits (in addition to clicking Save changes after editing a rule). Submit is the only way to permanently save rule changes.
Viewing humility data over time
After configuring rules and making predictions with humility monitoring enabled, you can view the humility data collected for a deployment from the Humility > Summary tab.
The X-axis shows the time range over which predictions were made for the deployment; the Y-axis shows the number of times humility rules triggered during each period.
The controls for model version and the data time range selectors work the same as those available on the Data Drift tab.
- Blog – Introducing DataRobot Humble AI
- Webinar – Humble AI: Building Guardrails Against Overconfidence
- Ebook – Humility in AI
- DataRobot public documentation – Humility Rules