What You Should Be Able to Do
- Anthony Peccia

- Sep 3, 2025
- 1 min read
Updated: Oct 30, 2025
What “What You Should Be Able to Do” Really Means
This isn’t a test. It’s your personal dashboard. After each class, you’ll see a short list of actions a strong risk manager should be able to pull off with the tools we just used. Your job is to check yourself: can you actually do these things on your own? If yes — excellent, keep pushing. If not — that’s gold, because you’ve just found where to focus. We’ll use those gaps in the next class to sharpen your skills even further.
In the real world, I assume we would probably use a combination of intuition and quantitative methods (data) to calculate residual risk.
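To make that concrete, here is a minimal sketch of one common convention, residual risk = inherent risk × (1 − control effectiveness), with an explicit knob for the intuition overlay. The function, the numbers, and the `intuition_adjustment` parameter are my own illustration under those assumptions, not anything from the class:

```python
# Sketch of a residual-risk estimate that blends a quantitative base
# with a judgment overlay. The residual = inherent * (1 - effectiveness)
# formula is a common convention; all numbers here are illustrative.

def residual_risk(inherent_loss: float,
                  control_effectiveness: float,
                  intuition_adjustment: float = 1.0) -> float:
    """Expected residual loss after controls, scaled by expert judgment.

    inherent_loss          -- data-driven loss estimate before controls ($)
    control_effectiveness  -- fraction of inherent loss controls remove (0..1)
    intuition_adjustment   -- multiplier from judgment (>1 = more worried)
    """
    if not 0.0 <= control_effectiveness <= 1.0:
        raise ValueError("control_effectiveness must be between 0 and 1")
    return inherent_loss * (1.0 - control_effectiveness) * intuition_adjustment

# Example: $2M inherent exposure, controls judged 70% effective, but
# intuition says the data is optimistic, so scale up by 1.25.
print(residual_risk(2_000_000, 0.70, 1.25))  # -> 750000.0
```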
My brother and I both worked in tech, and we noticed that (in my opinion) purely data-driven decisions led to bad outcomes a majority of the time. For example, building a feature roadmap purely from A/B tests instead of looking at the broader direction of where the company needs to go and what users actually want.
This led to a lot of resources going into features that didn't move the needle at all, or in some cases caused genuinely negative reactions that deviated from the company's core mission.
The real takeaway from the creation-execution-refit structure is that it's about the business service, not the server. Creation asks whether we've actually defined what must stay alive, how much disruption customers and markets can tolerate, and what fallbacks (manual workarounds, alternative vendors, overlays) exist before anything breaks. Execution is the moment of truth: when something fails in a messy way, can we detect it quickly, escalate to the right people, and keep the service running in some safe, if degraded, form instead of just "rebooting systems"? Refit then forces us to turn every incident into a design change, updating architecture, contracts, and playbooks so we don't fight the same fire twice. Concluding it with a question for the CRO in real life is when…
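A minimal sketch of the Execution idea under stated assumptions: detect a messy failure fast, escalate, and fall back to a degraded-but-safe answer instead of letting the service die. `primary_quote`, `cached_quote`, and `page_oncall` are hypothetical stand-ins; the post describes the pattern, not this code:

```python
# Detect quickly, escalate to the right people, keep the service running
# in a safe-if-degraded form instead of just "rebooting systems".
import logging

logger = logging.getLogger("pricing-service")

def primary_quote(order_id: str) -> float:
    raise TimeoutError("pricing engine unreachable")   # simulate a messy failure

def cached_quote(order_id: str) -> float:
    return 101.25   # stale but safe fallback price

def page_oncall(message: str) -> None:
    logger.error("ESCALATION: %s", message)            # stand-in for a real pager

def get_quote(order_id: str) -> float:
    try:
        return primary_quote(order_id)
    except Exception as exc:                           # detect
        page_oncall(f"primary pricing failed for {order_id}: {exc}")  # escalate
        return cached_quote(order_id)                  # degrade, don't die

print(get_quote("ORD-42"))   # service stays alive in degraded form -> 101.25
```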
Class 8 tied together the entire framework elegantly. Diagnosing the incident using CIA helped pinpoint what actually failed, while mapping to the 4A controls made the control gaps visible.
Layering E·C·R·G on top showed how gaps in governance and resilience amplified the breach, especially the missing audit trails, outdated playbooks, and siloed response.
It was the clearest demonstration that cybersecurity is not a technical specialty — it’s operational risk with a different surface area.
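As an illustration of keeping those layers visible side by side, here is a sketch of how an incident diagnosis might be recorded. The field names and sample findings are my assumptions (the findings echo the gaps named above), and I'm reading E·C·R·G as Exposure / Controls / Resilience / Governance; the 4A taxonomy itself isn't reproduced because the post doesn't spell it out:

```python
# Hypothetical record keeping the CIA diagnosis, control gaps, and
# E·C·R·G findings for one incident in a single structure.
from dataclasses import dataclass, field

@dataclass
class IncidentDiagnosis:
    name: str
    cia_failed: list[str] = field(default_factory=list)       # which of C, I, A broke
    control_gaps: list[str] = field(default_factory=list)     # where controls were missing
    ecrg_findings: dict[str, str] = field(default_factory=dict)  # one note per layer

breach = IncidentDiagnosis(
    name="Class 8 case study (illustrative)",
    cia_failed=["confidentiality"],
    control_gaps=["no audit trail on privileged access"],
    ecrg_findings={
        "Exposure":   "customer records reachable from a legacy system",
        "Controls":   "monitoring existed but alerts were siloed",
        "Resilience": "playbooks outdated, response improvised",
        "Governance": "no single owner for escalation decisions",
    },
)

for layer, finding in breach.ecrg_findings.items():
    print(f"{layer:>10}: {finding}")
```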
Exposure is really about understanding where we are vulnerable, but the human side makes this harder than it seems. People get used to how things normally work, so they stop noticing small signs that something might not be right. When we say we are “mapping exposures,” what we are really doing is trying to see the parts of the business that we have stopped questioning. Every team has routines, shortcuts, and assumptions that feel safe because they have worked before. That is why exposure is often hidden in places that look familiar or stable. The real challenge is being willing to ask simple but uncomfortable questions about how things actually work today, not how we believe they work. In that…
What really stands out to me in this case is how the whole problem didn’t come from one big mistake, but from a bunch of small decisions that all leaned in the same direction: “make the numbers look better.” Everyone seemed relieved when the new VaR looked cleaner and lower, and once that happened, nobody wanted to slow things down by asking harder questions. It’s interesting — the model itself wasn’t the real issue. The real issue was how quickly people accepted a result that matched what they already wanted to see. The culture rewarded the outcome, not the process, so checks and challenges naturally faded into the background. It’s a good reminder that risk management doesn’t fail loudly; it…
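To ground the VaR side of the story, here is a minimal historical-simulation VaR, the textbook quantile-of-past-returns approach. The synthetic returns and the 99% level are illustrative only; nothing here reproduces the model from the case:

```python
# Historical-simulation VaR: the loss threshold exceeded on only
# (1 - level) of past days, reported as a positive number.
import numpy as np

rng = np.random.default_rng(seed=7)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=750)  # ~3y of fake daily returns

def historical_var(returns: np.ndarray, level: float = 0.99) -> float:
    """Loss exceeded on (1 - level) of days, as a positive number."""
    return -np.quantile(returns, 1.0 - level)

print(f"1-day 99% VaR: {historical_var(daily_returns):.4%}")
# A cleaner, lower VaR is only reassuring if the inputs and assumptions
# were challenged -- which is exactly the process the case says got skipped.
```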