
What is SR 11-7?

Quick answer

SR 11-7 is a Federal Reserve supervisory letter, issued April 4, 2011, that sets the model risk management standard for U.S. banks. It requires independent model validation, documented assumptions, and board-level governance for every quantitative model used in decision-making. The OCC issued the same standard simultaneously as OCC Bulletin 2011-12.

The full answer

SR 11-7 is the Federal Reserve's supervisory guidance on Model Risk Management, issued April 4, 2011 and simultaneously published by the OCC as OCC Bulletin 2011-12. The full document is available at the Federal Reserve's SR Letters page. It is guidance, not a statute, but examiners treat it as the de facto industry standard.

The guidance defines a model as "a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates." In 2011, that mostly meant credit scorecards and stress-testing tools. Today it includes transaction monitoring systems, customer risk-rating engines, fraud detection algorithms, and AI-based AML tools.

SR 11-7 structures model risk management around three requirements:

Model development and implementation. The team building a model must document the theoretical basis, data sources, assumptions, and limitations before deployment. Undocumented assumptions are among the most common exam findings.

Model validation. An independent team must test the model's conceptual soundness, performance on out-of-sample data, and real-world outcomes. Validators must have no stake in the model's success. Self-validation doesn't count. At large banks, this is a dedicated MRM function. At smaller institutions, it's often a third party.
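The out-of-sample test at the heart of validation can be illustrated with a minimal sketch. All names, scores, and the 0.7 threshold below are invented for the example; the point is only that performance measured on the data used to build a model is not evidence of performance on data the model has never seen.

```python
# Hedged illustration of out-of-sample testing for a hypothetical
# risk-scoring rule. Data and thresholds are invented, not drawn
# from SR 11-7 itself.

def flag(score, threshold=0.7):
    """Toy model: flag any record whose risk score exceeds the threshold."""
    return score >= threshold

def hit_rate(records, threshold=0.7):
    """Fraction of flagged records that were truly suspicious."""
    flagged = [r for r in records if flag(r["score"], threshold)]
    if not flagged:
        return 0.0
    return sum(r["suspicious"] for r in flagged) / len(flagged)

# Development sample (used to tune the threshold) vs. a later,
# untouched holdout sample -- the out-of-sample test validators run.
development = [
    {"score": 0.9, "suspicious": True},
    {"score": 0.8, "suspicious": True},
    {"score": 0.3, "suspicious": False},
]
holdout = [
    {"score": 0.75, "suspicious": False},
    {"score": 0.95, "suspicious": True},
    {"score": 0.2,  "suspicious": False},
]

in_sample = hit_rate(development)   # 1.0 on the data used to tune it
out_of_sample = hit_rate(holdout)   # 0.5 -- performance degrades
```

A gap like the one above, between in-sample and out-of-sample performance, is exactly what an independent validator is expected to surface and document.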

Governance and controls. Institutions need a model inventory: a register of every in-scope model with purpose, risk tier, and validation status. Senior management owns the MRM policy. The board is expected to understand aggregate model risk, not just receive a one-page summary.
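A model inventory is, structurally, just a register with a few required fields per model. The sketch below is one possible shape, not a prescribed format; the field names, tiers, and the 365-day cycle are assumptions for illustration.

```python
# Hedged sketch of a model inventory record and a staleness check.
# Field names, risk tiers, and the 365-day cycle are illustrative
# assumptions, not requirements stated in SR 11-7.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ModelRecord:
    name: str
    purpose: str
    risk_tier: str                  # e.g. "high" / "medium" / "low"
    validation_status: str          # e.g. "validated" / "pending"
    last_validated: Optional[date]  # None = never validated

def overdue(inventory: List[ModelRecord], as_of: date,
            max_age_days: int = 365) -> List[str]:
    """Names of models whose validation is missing or stale."""
    return [
        m.name for m in inventory
        if m.last_validated is None
        or (as_of - m.last_validated).days > max_age_days
    ]

inventory = [
    ModelRecord("tm-engine", "transaction monitoring", "high",
                "validated", date(2023, 1, 15)),
    ModelRecord("crr-score", "customer risk rating", "high",
                "pending", None),
]
overdue(inventory, as_of=date(2024, 6, 1))  # ["tm-engine", "crr-score"]
```

Even a simple register like this makes the governance questions answerable on demand: what models exist, who owns the risk, and which validations are overdue.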

In 2021, the OCC confirmed via OCC Bulletin 2021-1 that SR 11-7 applies to AI and machine learning models, adding specific expectations around explainability, bias testing, and ongoing performance monitoring. All of that is grounded in the original 2011 framework. A bank using a machine learning model to decide which customers get enhanced due diligence is operating squarely within SR 11-7's scope.

SR 11-7 applies directly to state member banks and bank holding companies. The FDIC adopted parallel guidance through FIL-22-2017 for state non-member banks. National banks were already covered through OCC Bulletin 2011-12. Effectively, every federally regulated U.S. bank operates under the same standard.

Why this matters

SR 11-7 is the most-cited regulatory framework in model-related enforcement actions. For compliance teams deploying any algorithmic tool, understanding it isn't optional.

AML and fraud models are in scope. Transaction monitoring, customer risk scoring, SAR triage, fraud detection: all are models under the SR 11-7 definition. "What percentage of AML alerts are false positives?" is a model performance question. Regulators expect a documented answer, with validation data behind it.
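The false-positive question reduces to a simple, auditable calculation over alert dispositions. The sketch below assumes each alert record carries a `confirmed` flag indicating whether investigation found genuinely suspicious activity; the field name and data are invented for illustration.

```python
# Hedged sketch: false positive rate over a set of AML alerts.
# The "confirmed" field is a hypothetical disposition flag.

def false_positive_rate(alerts):
    """Share of alerts that did not result in a confirmed suspicious case."""
    if not alerts:
        return 0.0
    false_pos = sum(1 for a in alerts if not a["confirmed"])
    return false_pos / len(alerts)

# 95 alerts closed as non-suspicious, 5 confirmed.
alerts = [{"confirmed": False}] * 95 + [{"confirmed": True}] * 5
false_positive_rate(alerts)  # 0.95
```

The number itself matters less than being able to produce it, with the underlying disposition data, when an examiner asks.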

Exam triggers are real. Examiners ask for model inventories as part of BSA/AML and safety-and-soundness exams. A missing or incomplete inventory can trigger a full regulatory exam. Weak governance on a high-risk model can turn an exam into a consent order.

The consequences compound. When a bank fails an AML exam, model governance failures appear consistently in findings. In the most severe cases, persistent failures contribute to monitorships, which cost $10 million or more per year and run for three to five years.

AI raises the stakes. A model that can't be explained to an examiner fails the SR 11-7 explainability standard. "The algorithm flagged it" isn't an acceptable answer during a BSA exam. Every decision needs a documented rationale. The EU AI Act is heading in the same direction for European institutions, with high-risk AI systems requiring comparable documentation and audit trails.

Three practical steps for compliance teams:

  1. Audit your model inventory. Every system that uses a quantitative method to produce estimates or decisions belongs on it. If it scores customers, flags transactions, or prices risk, it's almost certainly a model.
  2. Verify validation independence. Developers validating their own models is a finding waiting to happen. Assign validation to a separate function or bring in a third party.
  3. Document assumptions before deployment, not after. Examiners read original development documentation. Retrofitted documentation is obvious.

How often customer risk ratings should be refreshed is an MRM question as much as a KYC question. The model producing those ratings needs its own validation cycle.
