What really caught my attention is how the bank's biggest vulnerability wasn't the cloud outage itself, but the hidden dependencies on a vendor that was misclassified as mid-tier. It really highlighted that third-party risk isn't a technology problem; it's a governance problem, and accountability can't be outsourced.
I wonder how banks can balance reliance on vendor data with their own independent verification. For example, should escalation authority and resilience testing always remain under the bank's direct control, even for smaller vendors, or are there practical ways to manage this trade-off without creating excessive costs?
The bank’s biggest problem wasn’t the cloud outage itself, but the fact that nobody actually understood how dependent the organization had become on a vendor classified as “mid-tier.” The incident shows how functional, informational, and even representational dependence can silently accumulate when tiering is based on spend instead of impact. What really surprised me was how many failures—slow escalation, misleading client messaging, unrealistic failover tests—all traced back to the same root cause: weak governance over the third party. It’s a good reminder that third-party risk isn’t about vendor technology; it’s about how well the bank manages the relationship.
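To make the spend-versus-impact distinction concrete, here is a minimal sketch of an impact-based tiering rule next to a spend-based one. The vendor name, thresholds, and fields are my own illustrative assumptions, not details from the case:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    annual_spend: float            # contract value in USD
    supports_critical_service: bool
    tolerable_downtime_hours: float  # how long the bank can absorb an outage

def tier_by_spend(v: Vendor) -> str:
    # Spend-based tiering: a cheap contract looks "mid-tier" even when
    # a critical service quietly depends on it.
    return "tier-1" if v.annual_spend > 5_000_000 else "mid-tier"

def tier_by_impact(v: Vendor) -> str:
    # Impact-based tiering: anything a critical service depends on, with
    # little tolerance for downtime, is tier-1 regardless of contract size.
    if v.supports_critical_service and v.tolerable_downtime_hours <= 4:
        return "tier-1"
    return "mid-tier"

cloud_vendor = Vendor("CloudCo", annual_spend=800_000,
                      supports_critical_service=True,
                      tolerable_downtime_hours=2)
print(tier_by_spend(cloud_vendor))   # mid-tier
print(tier_by_impact(cloud_vendor))  # tier-1
```

The point of the sketch is simply that the same vendor lands in two different tiers depending on which question the framework asks.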
We know risk extends to fourth parties and subcontractors, but where does the bank actually draw the line in practice? Even if a contract legally offloads the liability to a vendor, isn't the CRO still effectively on the hook if a critical service goes down? I’m trying to understand the gap between legal fault and actual operational accountability.
Oversight has to go all the way down the chain until the impact is deemed not material by management. In practice this is often handled through contractual restrictions on subcontracting. Legally, the loss may be recoverable from the third party, but the reputational damage stays with the FI.
In the lecture it was mentioned that for a third party we should check their resilience playbook, and that this is a two-step process. What I'd like to know is whether this check itself falls within our own control system. We want to make sure that the impact from third-party failures will not destroy us, so we set controls to monitor and investigate whether they are being careful about their part. Thanks in advance.
This class made me realize that third-party risk is often a governance problem disguised as a technology problem.
The case showed how a single cloud outage became a systemic failure because nobody mapped dependencies, nobody challenged the contract, and nobody verified the vendor’s own “green dashboards.”
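On the "green dashboards" point, one simple form of independent verification is to probe the vendor-hosted service directly instead of relying on the vendor's own status page. This is only a sketch under assumed names; the endpoint and vendor below are made up:

```python
import time
import urllib.request

def independent_health_check(url: str, timeout_s: float = 5.0) -> dict:
    """Probe the vendor-hosted service directly rather than trusting
    the vendor's status dashboard."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            latency = time.monotonic() - start
            return {"reachable": resp.status == 200,
                    "latency_s": round(latency, 3)}
    except Exception as exc:
        return {"reachable": False, "error": str(exc)}

# Hypothetical endpoint; the result would be compared against what the
# vendor's dashboard claims.
print(independent_health_check("https://payments.example-vendor.com/health"))
```

Even a check this crude would have surfaced the gap between the vendor's reported status and what clients were actually experiencing.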
I also found the three types of dependencies — functional, informational, representational — extremely useful. It explained why third-party risk often propagates in ways that internal teams never anticipate.
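As a rough illustration of how those three categories could be captured in a dependency register (the vendor, services, and field names are hypothetical, and the type descriptions are my own paraphrase of the lecture):

```python
from enum import Enum

class DependencyType(Enum):
    FUNCTIONAL = "functional"              # we rely on the vendor to perform the activity
    INFORMATIONAL = "informational"        # we rely on data or reporting the vendor supplies
    REPRESENTATIONAL = "representational"  # the vendor acts or communicates on our behalf

# One vendor relationship, three distinct ways a failure can propagate into the bank.
register = [
    {"vendor": "CloudCo", "service": "payments hosting",   "type": DependencyType.FUNCTIONAL},
    {"vendor": "CloudCo", "service": "uptime reporting",    "type": DependencyType.INFORMATIONAL},
    {"vendor": "CloudCo", "service": "client status page",  "type": DependencyType.REPRESENTATIONAL},
]

for entry in register:
    print(f'{entry["vendor"]}: {entry["service"]} ({entry["type"].value})')
```

Seeing all three rows attached to a single vendor makes it easier to explain why an outage can hit operations, reporting, and client communication at the same time.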
This session made it clear that a third-party framework can look mature on paper while failing completely in practice.