What is model risk management?
Quick answer
Model risk management is the practice of identifying, measuring, and controlling risks from decisions made using quantitative models. A flawed model driving credit, fraud, or AML decisions creates regulatory and financial exposure. U.S. banks follow SR 11-7, issued jointly by the Federal Reserve and OCC in 2011.

---
The full answer
Most banks have a clear picture of their core credit models. What's hazier is the transaction monitoring system purchased from a vendor three years ago, or the customer risk-scoring algorithm the AML team recalibrated last quarter. Model risk management is the discipline that covers all of them.
SR 11-7, published jointly by the Federal Reserve and OCC in April 2011, is the foundational U.S. standard. It defines a model as any quantitative method that applies statistical, economic, or mathematical techniques to process input data into estimates that inform decisions. The bar is deliberately low. A credit scorecard in Excel is a model. So is a neural network processing 10 million transactions a day.
SR 11-7 identifies three distinct categories of model failure:
- Wrong inputs or assumptions: Data quality problems, poorly chosen variables, assumptions that don't hold in practice
- Implementation errors: Bugs in code, integration mistakes, environment differences between testing and production
- Misuse: Applying a model outside its intended scope, or relying on outputs without understanding the model's limitations
In response, every formal MRM program requires three elements: standards for model development and implementation; independent validation by people who had no role in building the model; and ongoing governance, including a maintained inventory, defined model owners, and live performance monitoring with triggers for re-validation.
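The governance element can be made concrete with a minimal sketch of one inventory row and a re-validation trigger. Everything here (the `InventoryEntry` class, the field names, the monthly cycle) is illustrative, not taken from SR 11-7:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryEntry:
    """One row in a model inventory; all fields are illustrative."""
    model_id: str
    owner: str                 # defined model owner
    last_validated: date
    revalidation_months: int   # policy-defined re-validation cycle

    def revalidation_due(self, today: date) -> bool:
        # Trigger when the policy cycle has elapsed since last validation.
        elapsed = ((today.year - self.last_validated.year) * 12
                   + (today.month - self.last_validated.month))
        return elapsed >= self.revalidation_months

entry = InventoryEntry("TM-SCORE-01", "aml-analytics", date(2023, 5, 1), 12)
print(entry.revalidation_due(date(2025, 1, 15)))  # True: 20 months elapsed
```

In a real program the trigger would not be purely calendar-driven: a drift alert, a scope change, or a material recalibration would also flip it.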
Vendor models don't get a pass. OCC Bulletin 2011-12 makes this explicit. Buying a model from a third party doesn't transfer the validation obligation. The bank owns the risk the model creates.
The UK's PRA updated its expectations in May 2023 with SS1/23. The five-principle framework follows SR 11-7 in substance but adds explicit board-level accountability. It's not enough for a risk team to have model risk policies; the board must set a model risk appetite as a formal governance limit.
The EU AI Act (Regulation (EU) 2024/1689) creates a parallel layer for AI-based models in high-risk categories. Credit scoring is listed as high-risk outright; AML monitoring and fraud detection tools can fall in scope depending on how they are used. The Act takes effect in phases, with obligations for most high-risk AI systems applying from August 2026 and for high-risk systems embedded in separately regulated products from August 2027. Banks operating on both sides of the Atlantic now run two governance frameworks over the same models.
Why this matters
AML and fraud operations run on models. Customer risk ratings, transaction monitoring thresholds, beneficial ownership scores, SAR prioritization: all of it depends on quantitative systems making decisions at machine speed.
AI-powered transaction monitoring creates a validation problem that rules-based systems didn't. An examiner reviewing a rules-based monitor can read the rules. Reviewing a machine learning model requires documentation of training data, feature selection rationale, back-testing results, and ongoing performance tracking cadence. If the bank can't produce that documentation, the model is an audit finding. Accurate outputs don't cure the governance gap.
False positive rates are a model performance metric. When 95% or more of AML alerts are false positives, the usual root cause is a model that was poorly calibrated, never re-validated after market conditions changed, or applied outside its intended scope. That's a model risk failure, not just an operational one.
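Treating alert quality as a model performance metric is just arithmetic over dispositions. A back-of-the-envelope sketch, where the function name and the escalated-versus-not simplification are assumptions (real programs use richer disposition codes):

```python
def alert_metrics(alerts_total: int, alerts_escalated: int) -> dict:
    """Basic alert-quality metrics for a transaction monitoring model.

    Treats any alert that was never escalated (e.g. to SAR review)
    as a false positive -- a deliberate simplification.
    """
    false_positives = alerts_total - alerts_escalated
    return {
        "false_positive_rate": false_positives / alerts_total,
        "precision": alerts_escalated / alerts_total,
    }

m = alert_metrics(alerts_total=10_000, alerts_escalated=400)
print(f"FPR: {m['false_positive_rate']:.0%}")  # FPR: 96%
```

Tracked over time, a rising false positive rate is exactly the kind of signal that should feed a re-validation trigger rather than just an operations dashboard.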
Model drift is the monitoring problem that bites hardest. A transaction monitoring model validated in 2021 learned from pre-pandemic transaction patterns. Behavior shifted. Criminal typologies shifted. If performance metrics aren't tracked against validation-time benchmarks, the model degrades without signal until a regulatory exam reveals the gap.
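One common way to track this kind of distributional drift against validation-time benchmarks is the Population Stability Index. A minimal sketch, with illustrative bin proportions and the usual rule-of-thumb thresholds (neither mandated by any regulator):

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned score distributions.

    `expected` holds bin proportions captured at validation time,
    `actual` the current production proportions. Common reading:
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 consider re-validation.
    """
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # score quartiles at validation
current = [0.10, 0.20, 0.30, 0.40]   # shifted production distribution
print(round(psi(baseline, current), 3))  # 0.228 -> investigate
```

The point is less the specific statistic than the baseline: without proportions frozen at validation time, there is nothing to compare production behavior against.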
Perpetual KYC systems that update customer risk scores continuously are, under SR 11-7, a model or a system of models. Banks adopting pKYC without MRM governance in place are building a future audit finding.
Regulators treat MRM failures as a separate category from AML failures. A bank can have a functional compliance program and still receive Matters Requiring Attention for weak model validation. MRM deficiencies then follow the same escalation path as AML ones: MRAs, consent orders, and in serious cases, restrictions on asset growth. See what happens when a bank fails an AML exam.
We've seen banks fined not because their models produced wrong outcomes, but because they couldn't demonstrate they'd ever validated them properly.
Related questions
- Can AI be used for AML transaction monitoring?
- What percentage of AML alerts are false positives?
- How often should customer risk ratings be refreshed?
- What triggers a regulatory exam?
- What happens when a bank fails an AML exam?
Related concepts and regulations
- Who needs to comply with the EU AI Act?
- When does the EU AI Act take effect?
- What is perpetual KYC?
- What is the difference between CDD and EDD?