When it comes to predictive modeling, nothing is more important than building trust in the models you are using. Individuals and organizations need to know why a model came to the conclusions it did and why those answers can be trusted. Explanations should be suitable for all stakeholders and written in language understandable to the people affected by the model.
Read the white paper, XEMP Prediction Explanations with DataRobot, to learn the intricacies of prediction explanations and why DataRobot’s XEMP offers more reliable and consistent explanations than open source explanation methodologies such as LIME.
Learn how XEMP:
- Works directly on the deployed model itself rather than on an approximation of it.
- Is stable with every explanation resulting in a unique and replicable prediction. You won’t get two different explanations for the same prediction.
- Is highly scalable: XEMP is an enterprise-ready solution for delivering fast prediction explanations.
- Is easy to understand, answering the questions stakeholders actually ask rather than relying on abstract mathematical concepts such as gradients.
- Provides stronger explanations, which are critical because they give a quantitative basis for evaluating predictions.
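The stability point is worth making concrete. Perturbation-based explainers like LIME fit a local surrogate on randomly sampled points around the instance, so two runs can attribute the same prediction differently. The toy sketch below (not XEMP's actual method, which the white paper describes; the model and attribution scheme here are invented for illustration) contrasts a sampling-based attribution, which shifts with the random seed, against a deterministic one, which always returns the same answer:

```python
import random

# Toy black-box model of two features (stand-in for a deployed model).
def model(x1, x2):
    return x1 * x2 + x1 ** 2

# LIME-style idea: sample random perturbations around the instance and
# estimate a crude per-feature attribution from the sampled responses.
# Different seeds -> different samples -> different explanations.
def sampled_explanation(x1, x2, seed, n=50, eps=0.5):
    rng = random.Random(seed)
    base = model(x1, x2)
    attr = [0.0, 0.0]
    for _ in range(n):
        d1 = rng.uniform(-eps, eps)
        d2 = rng.uniform(-eps, eps)
        delta = model(x1 + d1, x2 + d2) - base
        attr[0] += delta * d1
        attr[1] += delta * d2
    return [round(a / n, 3) for a in attr]

# Deterministic alternative: one-at-a-time finite differences.
# No sampling, so repeated calls always agree.
def deterministic_explanation(x1, x2, h=1e-6):
    base = model(x1, x2)
    g1 = (model(x1 + h, x2) - base) / h
    g2 = (model(x1, x2 + h) - base) / h
    return [round(g1, 3), round(g2, 3)]

print(sampled_explanation(2.0, 3.0, seed=1))  # varies with the seed
print(sampled_explanation(2.0, 3.0, seed=2))  # a different answer
print(deterministic_explanation(2.0, 3.0))    # always [7.0, 2.0]
```

The same prediction, `model(2.0, 3.0)`, gets two different sampled explanations but a single deterministic one, which is the replicability property the bullet above describes.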