Minding Your Models
Using AI-based models can increase your organization’s revenue, improve operational efficiency, and enhance client relationships.
But there’s a catch.
You need to know where your deployed models are, what they do, the data they use, the results they produce, and who relies upon their results. That requires a good model governance framework.
At many organizations, the current framework focuses on the validation and testing of new models, but risk managers and regulators are coming to realize that what happens after model deployment is at least as important.
No predictive model — no matter how well-conceived and built — will work forever. It may degrade slowly over time or fail suddenly. So older models need to be monitored closely and, when necessary, rebuilt from scratch.
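Degradation of this kind is commonly detected by comparing the distribution of incoming scoring data against the data the model was trained on. As an illustration only (not a DataRobot API), a minimal drift check using the population stability index (PSI) might look like this:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the live distribution of a numeric feature against training data.

    A PSI above roughly 0.25 is commonly read as significant drift,
    a signal that the model should be reviewed or retrained.
    """
    # Bin both samples using edges derived from the training (expected) data
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert counts to proportions, clipping to avoid division by zero
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical example: live data drawn from a shifted distribution
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(1.0, 1.0, 10_000)   # the feature's mean has drifted
print(population_stability_index(train, live) > 0.25)
```

A production system would run a check like this per feature on a schedule and route alerts into the governance workflow rather than printing results.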
Even organizations with good current controls may have significant technical debt from these models. Models built in the past may be embedded in reports, application systems, and business processes. They may not have been documented, tested, or actively monitored and maintained. If the developers are no longer with the company, reverse engineering will be necessary to understand what they did and why.
Automated machine learning (AutoML) tools make building hundreds of models almost as easy as building one. Aimed at citizen data scientists, these tools are expected to dramatically increase the number of models that organizations put into production and must continuously monitor.
Reduce Risk with Systematic Model Controls
Every organization needs a model governance framework that scales as its use of models grows. You need to know whether your models are at risk of failure and whether they are measuring the right data. With financial regulations such as the Federal Reserve’s SR 11-7 setting expectations for model governance and model risk management, you must also verify that your models meet applicable external standards.
This framework should cover such subjects as roles and responsibilities, access control, change and audit logs, troubleshooting and follow-up records, production testing, validation activities, a model history library, and traceable model results.
Using DataRobot MLOps
Our machine learning operations (MLOps) tool allows different stakeholders in an organization to control all production models from a single location, regardless of the environments or languages in which the models were developed or where they are deployed.
For Model Management
The DataRobot “any model, anywhere” approach gives its MLOps tool the ability to deploy AI models to virtually any production environment — the cloud, on-premises, or hybrid.
It creates a model lifecycle management system that automates key processes, such as troubleshooting and triage, model approvals, and secure workflows. It can also handle model versioning and rollback, model testing, model retraining, and model failover and failback.
For Model Monitoring
This advanced tool from DataRobot provides instant visibility into the performance of hundreds of models, regardless of deployment location. It refreshes production models on a schedule over their full lifecycle or automatically when a specific event occurs. To support trusted AI, it even offers configurable bias monitoring.
Find Out More
Regulators and auditors are increasingly aware of the risks of poorly managed AI, and more stringent model risk management practices will soon be required.
Now is the time to address the gaps in your organization’s model management by adopting a robust new system. As a first step, download the latest DataRobot white paper, “What Risk Managers Need to Know about AI Governance,” to learn about our dynamic model management and monitoring solutions.