The three MECE categories of controls (Security, Segregation, and Monitoring) provide a neat and comprehensive structure. It’s interesting how each addresses a distinct failure mode: keeping the wrong actors out, preventing misuse of power, and continuously verifying outcomes. Together they form a complete ecosystem for managing operational risk. I find it fascinating that the ultimate goal of controls ties back to Risk Appetite: controls don’t exist for their own sake but to keep losses within acceptable limits. That’s such an elegant link between day-to-day operational design and strategic risk governance.
It’s relatively clear in theory how inherent risk is reduced to residual risk through controls, and how we then compare it with risk appetite. But in practice, the challenge is demonstrating that residual risk has truly decreased to an acceptable level. I am wondering how we can verify that the control effectiveness we assume actually holds in the real world.
It is straightforward to understand how inherent risk is reduced to residual risk, which we then compare with risk appetite to decide whether to enhance controls or reduce exposure. However, how do we prove that the residual risk has actually decreased, in the real world, to a level that stakeholders find acceptable? If we follow the procedure in Assignment 4, how do we justify ratings such as VH, H, M, and so on?
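One way to make the VH/H/M question concrete is to treat the ratings as an ordinal scale and apply the control adjustment and the appetite comparison explicitly. The sketch below is only an illustration: the one-notch-per-adequacy-point rule and the "M" appetite threshold are assumptions, not the methodology from the notes.

```python
# Minimal sketch of the inherent -> residual -> appetite comparison.
# The rating scale, the notch-based adjustment, and the appetite threshold
# are illustrative assumptions, not the course's prescribed method.

RATINGS = ["L", "M", "H", "VH"]  # ordinal scale, low to very high

def residual_rating(inherent: str, control_adequacy: int) -> str:
    """Reduce the inherent rating by one notch per point of control adequacy
    (0 = no effective controls, 2 = strong controls). Floor at 'L'."""
    idx = RATINGS.index(inherent)
    return RATINGS[max(0, idx - control_adequacy)]

def within_appetite(residual: str, appetite: str = "M") -> bool:
    """Residual risk is acceptable if it does not exceed the appetite level."""
    return RATINGS.index(residual) <= RATINGS.index(appetite)

# Example: a 'VH' inherent risk with strong controls drops to 'M',
# which sits inside an appetite of 'M'.
res = residual_rating("VH", 2)
print(res, within_appetite(res))  # -> M True
```

Even with a scheme like this, the justification for each notch still has to come from evidence (test results, loss data, audit findings), which is exactly the verification problem raised above.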
During Assignment 4, we had a few questions when applying the notes. First, we were not sure how to get an overall score for control adequacy. Since each category may have several controls, should we calculate a weighted average? And if so, how should the weights be assigned: by importance, coverage, or effectiveness? Second, we were unsure how to identify the changes in frequency and severity after applying controls. Which controls reduce frequency, which reduce severity, and how do we measure the shift? Should this be done with qualitative judgment, or should we use a scoring adjustment system?
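If a weighted average is acceptable, one simple version is to weight each control by the share of the exposure it covers. The sketch below assumes a 1-to-5 effectiveness scale and coverage-based weights; both are illustrative choices, not something the notes prescribe.

```python
# Hedged sketch of rolling several controls up into one category score.
# The 1-5 effectiveness scale and coverage-based weights are assumptions.

def category_adequacy(controls: list[tuple[float, float]]) -> float:
    """controls: (effectiveness_score, weight) pairs, where weight could be
    the share of the exposure the control covers. Returns the weighted mean."""
    total_weight = sum(w for _, w in controls)
    return sum(score * w for score, w in controls) / total_weight

# Example: three monitoring controls scored 1-5, weighted by coverage.
monitoring = [(4, 0.5), (3, 0.3), (5, 0.2)]
print(round(category_adequacy(monitoring), 2))  # -> 3.9
```

Whether the weights should reflect importance, coverage, or demonstrated effectiveness is still a judgment call; the formula only makes that judgment explicit.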
From the explanation given, I understand that some controls stop problems from happening while others minimize the damage once they occur. But when looking at a specific control in practice, what criteria can we use to determine whether it’s meant to limit frequency or limit severity?
It depends on what the controls do. For example, one that detects and blocks a fraudulent transaction from going through reduces the frequency of losses from fraudulent transactions. One that requires a transaction above a threshold, say $1mm, to be flagged for review and approval reduces the severity. In trading, limits reduce the severity of loss.
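Following that logic, one could tag each control by the dimension it acts on and adjust frequency and severity separately. The sketch below is purely illustrative; the notch reductions and the frequency/severity tags are assumptions, not a scoring system from the notes.

```python
# Illustrative sketch: tag each control as acting on frequency or severity
# and adjust the two dimensions separately. Notch sizes are assumptions.

RATINGS = ["L", "M", "H", "VH"]

def apply_controls(freq: str, sev: str, controls: list[dict]) -> tuple[str, str]:
    """Each control has a 'dimension' ('frequency' or 'severity') and a
    'notches' reduction. Blocking/preventive controls typically act on
    frequency; limits and approval thresholds typically act on severity."""
    f, s = RATINGS.index(freq), RATINGS.index(sev)
    for c in controls:
        if c["dimension"] == "frequency":
            f = max(0, f - c["notches"])
        else:
            s = max(0, s - c["notches"])
    return RATINGS[f], RATINGS[s]

controls = [
    {"name": "fraud screen blocks transaction", "dimension": "frequency", "notches": 1},
    {"name": "approval required above $1mm threshold", "dimension": "severity", "notches": 1},
]
print(apply_controls("H", "VH", controls))  # -> ('M', 'H')
```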