Looking at this from the perspective of controls, the biggest failure was the collapse of independent validation. The validation team explicitly flagged missing documentation and conceptual weaknesses, yet the model was pushed into production anyway. What I found most interesting is that once that challenge was ignored, every other control, such as monitoring, change control, and even version tracking, became meaningless.
Given that risks like proxy bias and model drift are essentially unavoidable even with controls, how should a Model Risk team actually report that residual risk? Since we can't reduce it to zero, does it make sense to set a qualitative limit, or should we apply something like the residual risk heat maps we use in Op Risk?
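As a rough illustration of what a heat-map style rating for residual model risk could look like, here is a minimal sketch assuming a hypothetical 5x5 likelihood x impact scale; the band thresholds and the example scores are purely illustrative assumptions, not any house or regulatory standard.

```python
# Hypothetical residual-risk heat map rating: likelihood x impact on a 5x5 scale.
# Band thresholds below are illustrative, not a standard.

BANDS = [(4, "Low"), (9, "Medium"), (14, "High"), (25, "Critical")]

def residual_risk_band(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and 1-5 impact scores to a qualitative band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("Scores must be on a 1-5 scale")
    score = likelihood * impact
    for threshold, band in BANDS:
        if score <= threshold:
            return band
    return "Critical"

# Example: proxy bias judged likely to recur (4) with moderate impact (3)
# even after controls -> reported as "High" residual model risk.
print(residual_risk_band(4, 3))  # High
```

The appeal of this kind of report is that it keeps the residual risk visible and comparable across models without pretending it can be measured precisely or reduced to zero.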
The bank treated VaR as a storytelling tool rather than a risk measure, and that mindset drove every failure that followed: assumptions were “optimized,” validation was overridden, monitoring signals were rationalized, and no one felt accountable for challenging a 40% drop in reported risk.
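To make the "40% drop" concrete, here is a minimal sketch of a parametric VaR calculation assuming normal returns; the $1bn book size, confidence level, horizon, and the before/after volatility inputs are illustrative assumptions, not figures from the case. It shows how a reported VaR figure can fall sharply from a changed volatility assumption alone, with no reduction in actual exposure.

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(portfolio_value: float, daily_vol: float,
                   confidence: float = 0.99, horizon_days: int = 10) -> float:
    """One-sided parametric VaR under a normal-returns assumption."""
    z = NormalDist().inv_cdf(confidence)
    return portfolio_value * daily_vol * z * sqrt(horizon_days)

book = 1_000_000_000  # illustrative $1bn book

# Conservative volatility estimate vs. an "optimized" (shorter, calmer window) one
var_before = parametric_var(book, daily_vol=0.012)
var_after = parametric_var(book, daily_vol=0.007)

print(f"Reported VaR before: {var_before:,.0f}")
print(f"Reported VaR after:  {var_after:,.0f}")
print(f"Drop: {1 - var_after / var_before:.0%}")  # ~42%, from the vol input alone
```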
To me, the lesson is that model risk is fundamentally organizational risk. Even a technically sound model becomes dangerous when incentives reward lower numbers, escalation paths can be bypassed, and no one owns the responsibility to say “this doesn’t make sense.” The failure didn’t start with the code—it started with the culture.
This was my favourite lecture, as I worked in Model Validation at RBC. I enjoyed doing the quantitative work there, but I did not understand the significance of my work. The front office quants had developed the models to price bespoke derivatives, and multiple teams conducted independent model validation and valuations. I did not understand the point of this until I read this week's case on model risk. The most important takeaway from the lecture was segregation: those who benefit from the model output should not be allowed to self-approve changes to the model. At RBC, front office quant models had to be validated not only by Model Validation in Risk but also by Valuations in Product Control.…
This class made it clear that model risk is rarely about the math — it’s about behavior, governance, and incentives.
The VaR Mirage case showed how a model can be “performing as designed” and still cause billions in losses because the real failure was in why it was built, how it was approved, and how people responded once it started breaking.
The idea that “better modeling” can actually hide risk instead of revealing it struck me as extremely powerful. It reminded me that the second line’s job is not to redo the math, but to challenge assumptions, incentives, shortcuts, and governance.