
What is Model Risk and Why Does it Matter?

April 29, 2022
by Diego Oppenheimer
· 6 min read

With the big data revolution of recent years, predictive models are being rapidly integrated into more and more business processes. This delivers substantial benefits, but it also exposes institutions to greater risk of operational losses. When business decisions are made based on bad models, the consequences can be severe. The stakes in managing model risk are at an all-time high, but automated machine learning provides an effective way to reduce them.

Prior to the financial crisis of 2008, Model Risk Management within the financial services industry was driven by industry best practices rather than regulatory standards (which brings to mind the saying “a fox guarding the hen house”). However, after the financial crisis, financial regulators around the world stepped up to the challenge of reining in model risk across the financial industry.

In 2011, the Federal Reserve Board (FRB) and the Office of the Comptroller of the Currency (OCC) issued a joint regulation specifically targeting Model Risk Management (respectively, SR 11-7 and OCC Bulletin 2011-12). This regulation laid the foundation for assessing model risk for financial institutions around the world, but was initially targeted towards Systemically Important Financial Institutions (SIFIs), which were deemed by the government to be “too big to fail” during the Great Recession.

In 2017, additional regulation targeted much smaller financial institutions in the U.S. when the Federal Deposit Insurance Corporation (FDIC) announced its adoption of the Supervisory Guidance on Model Risk Management previously outlined by the FRB and OCC. The FDIC’s action was announced through a Financial Institution Letter, FIL-22-2017. The new regulation greatly reduced the minimum asset threshold for compliance from $50 billion to $1 billion. This required large capital investments from regional and community banks to ensure alignment with regulatory expectations, something the SIFIs had a very long head start on.

Recently, Stanford University released its 2022 AI Index Annual Report, which showed that between 2016 and 2021, the number of bills containing the phrase “artificial intelligence” passed into law across 25 countries grew from 1 to 18. Among these countries, Spain, the United Kingdom, and the United States passed the most AI-related bills in 2021, adopting three each. As machine learning advances globally, we can only expect the focus on model risk to continue to increase.

The growing attention around regulation leads us to assess the concept of “model risk.” You might be wondering: what is model risk, and how can it be mitigated? This is a complicated question, but before we dive into model risk, a simpler question must be answered first: What is a model? The regulators have provided a universal definition that has been adopted across the financial industry. They define a model to be “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

Figure 1: The main components of a model as defined by banking industry regulators.

Therefore, if a process includes inputs, calculations, and outputs, then it falls under the regulatory classification of a model. This is a broad definition, but since the intent was to mitigate model risk, a broad definition was established to maximize the impact of the regulation. If there is any doubt about the classification of a process, regulators want financial institutions to err on the side of “model.”

With the definition of a model now in place, the regulation next defined model risk as “the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.” In other words, model risk can lead to tangible losses for the bank and its shareholders. Regardless of where an institution is using a model in its enterprise, model risk primarily occurs for two reasons:

  1. A model may have been built as it was intended but could have fundamental errors and produce inaccurate outputs when compared to its design objective and intended use.
  2. A model may be used incorrectly or inappropriately, or its limitations or assumptions may not be fully understood. 

The need for an effective Model Risk Management (MRM) framework can be demonstrated with countless case studies of recent MRM failures. For example, Long Term Capital Management was a large hedge fund led by Nobel laureates in economics and world-class traders, but it ultimately failed due to unmitigated model risk. In another example, a large global bank’s misuse of a model caused billions of dollars in trading losses. The details of these examples are often the topic of business school case studies and debate, but there is no arguing that model risk is very real and must be managed. But how?

The FDIC’s regulation can be broken down into three main components used to manage model risk:

  • Model Development, Implementation, and Use – The initial responsibility to manage model risk is on those developing, implementing, and using the models. Model development relies heavily on the experience and judgment of developers, and model risk management should include disciplined model development and implementation processes that align with the model governance and control policies. 
  • Model Validation – Prior to the use of a model (i.e., production deployment), it must be reviewed by an independent group—model validation. Model validation is the set of processes and activities intended to independently verify that models are performing as expected, in line with their design objectives and business uses. The model validation process is intended to provide an effective challenge to each model’s development, implementation, and use, and is crucial to effectively identifying and managing model risk.
  • Model Governance, Policies, and Controls – Strong governance provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for testing that policies and procedures are being carried out as specified. This governance includes tracking the status of each model on an inventory across the entire enterprise.  
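One way to picture the enterprise-wide inventory the governance bullet describes is a simple record per model with a validation status. The following Python sketch is purely illustrative: the field names, statuses, and example models are assumptions, not regulatory terms.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical sketch of a minimal model-inventory record, one per model,
# tracked across the entire enterprise. Field names and status values
# (owner, validation_status, and so on) are illustrative assumptions.
@dataclass
class ModelRecord:
    model_id: str
    name: str
    owner: str                 # team accountable for development and use
    business_use: str          # intended use, per the regulatory definition
    validation_status: str = "pending"   # pending / approved / rejected
    last_validated: Optional[date] = None

    def approve(self, validated_on: date) -> None:
        """Record an independent validation sign-off."""
        self.validation_status = "approved"
        self.last_validated = validated_on

# Track every model in one place and surface the ones that have not yet
# cleared independent validation.
inventory = [
    ModelRecord("M-001", "PD scorecard", "credit-risk", "loan underwriting"),
    ModelRecord("M-002", "AML alert model", "compliance", "transaction monitoring"),
]
inventory[0].approve(date(2022, 4, 1))
pending = [m.model_id for m in inventory if m.validation_status != "approved"]
print(pending)  # → ['M-002']
```

Even a toy inventory like this makes the governance point concrete: the status of every model is queryable in one place, so nothing reaches production without a recorded, independent sign-off.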

Initial alignment to these regulatory requirements required SIFI banks to invest millions of dollars to build new processes and teams, and now that same burden lies with community and regional institutions. It is impossible to overemphasize the need for an institution to have sufficient model governance, policies, and controls. Regardless of the technology at the disposal of the model developers or model validators, there is no replacement for a sound model governance process. But isn’t there a more efficient way to use technology to reduce model risk, while increasing the transparency and auditability of the model development, implementation, and use process?  A purposeful MLOps strategy can provide exactly this.

Traditional model development methods are time-consuming, tedious, and subject to user error and bias. Instead of manually coding steps such as variable selection, data partitioning, model performance testing, and model tuning, best practices can be automated with automated machine learning, and guard rails can be enforced when it is combined with an MLOps strategy. Automated machine learning plus MLOps allows for easy replication of the model development process, which gives model validators more time to independently assess the model and its potential limitations, ultimately driving value for the validation process. MLOps provides the guard rails, documentation, monitoring, and approval processes needed for security and audit.
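To make the guard-rail idea concrete, here is a minimal, stdlib-only Python sketch of two of the automated steps listed above: data partitioning and a performance-testing gate before deployment. Everything in it is an assumption for illustration: the synthetic data, the deliberately trivial threshold “model,” and the 0.7 accuracy floor (an assumed internal policy value, not a regulatory number).

```python
import random

random.seed(0)

# Synthetic data: the label is (x > 0.5), flipped 10% of the time as noise.
data = []
for _ in range(1000):
    x = random.random()
    y = (x > 0.5) != (random.random() < 0.1)
    data.append((x, y))

# Automated partitioning: reserve a holdout set for independent testing.
split = int(0.8 * len(data))
train, holdout = data[:split], data[split:]

def fit(samples):
    """'Tune' the model by picking the cutoff with the best training accuracy."""
    score, cutoff = max(
        (sum((x > t) == y for x, y in samples), t)
        for t in (i / 20 for i in range(21))
    )
    return cutoff

cutoff = fit(train)
accuracy = sum((x > cutoff) == y for x, y in holdout) / len(holdout)

# Guard rail: block promotion to production if holdout performance misses
# the policy floor, leaving an auditable record of the decision.
ACCURACY_FLOOR = 0.7
approved = accuracy >= ACCURACY_FLOOR
print(f"holdout accuracy={accuracy:.2f}, approved={approved}")
```

The point is not the model, which is trivial on purpose, but the pipeline shape: the split, the evaluation, and the pass/fail gate all run automatically and identically every time, which is exactly the replicability that gives validators time back.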

The new field of MLOps offers a much stronger framework for model validation, documentation, and oversight than traditional manual efforts, while aligning more closely with ever-increasing regulatory requirements and vastly reducing “model risk.”

About the author
Diego Oppenheimer

Executive Vice President of MLOps, DataRobot

Diego Oppenheimer is the EVP of MLOps at DataRobot and was previously co-founder and CEO of Algorithmia, the enterprise MLOps platform, where he helped organizations scale and achieve their full potential through machine learning. Since Algorithmia was acquired by DataRobot in July 2021, he has continued his drive to get ML models into production faster and more cost-effectively with enterprise-grade security and governance. He brings his passion for data from his time at Microsoft, where he shipped Microsoft’s most used data analysis products, including Excel, Power Pivot, SQL Server, and Power BI. Diego holds a Bachelor’s degree in Information Systems and a Master’s degree in Business Intelligence and Data Analytics from Carnegie Mellon University.
