Question: Your cyber risk management program is effective at proactively
finding risks.
Answer: Effective / Not Effective (circle one).
How do you know? What is being measured?
After all, it’s a key program for proactively identifying
cyber risks.
Lots of resources, frameworks, and effort.
The same expectations as any other program of that size
and scale.
But no obvious formalized way or feedback loop to evaluate,
measure, or compare just how good that proactive risk identification is.
Wait, what?!?
Maybe teams don't want to know how effective they are. Does the question even matter? Or is the question just not often asked? So many perplexing follow-on questions in my head.
In my thinking, a risk identified outside the cyber
risk management process that never makes it into the risk register is
as significant as a software defect not caught by the QA team.
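If you wanted a first cut at measuring this, one simple option is to track where each risk was first identified and compute an "escape rate," the same way QA teams track defects that slip past testing. Below is a minimal sketch of that idea; it isn't from any particular framework, and the record layout and field names are purely hypothetical.

```python
# Minimal sketch (illustrative only): a "risk escape rate" analogous to a QA
# defect escape rate. Assumes each known risk is tagged with whether the cyber
# risk program identified it first, or it surfaced some other way (incident,
# audit, pen test, etc.). All names and fields here are hypothetical.

from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    found_by_program: bool  # True if the risk program identified it first

def risk_escape_rate(risks: list[Risk]) -> float:
    """Fraction of known risks that the program did NOT find first."""
    if not risks:
        return 0.0
    escaped = sum(1 for r in risks if not r.found_by_program)
    return escaped / len(risks)

# Example: three risks found by the program, two that surfaced elsewhere.
register = [
    Risk("R-101", found_by_program=True),
    Risk("R-102", found_by_program=True),
    Risk("R-103", found_by_program=False),  # surfaced during an incident
    Risk("R-104", found_by_program=True),
    Risk("R-105", found_by_program=False),  # flagged by an external audit
]

print(f"Risk escape rate: {risk_escape_rate(register):.0%}")  # -> 40%
```

The point isn't the specific number; it's that tracking where risks are first found gives the program the same kind of feedback loop QA gets from escaped defects.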
Some potentially serious root causes as to why those risks were
missed by the cyber risk program:
Training issue? Process hole?
Lack of resources?
Or, just the historically comforting knowledge that they
aren’t tracked, goaled, or owned?
I’m feeling like this should be important, or that I have
missed a key concept someplace…
…particularly given the resources and effort involved.
How do you demonstrate that your cyber risk management
program is effective for the resources and effort you've put into it?
Join the discussion at #crazygoodcyberteams on Twitter or LinkedIn. Also, follow me on Twitter
for discussion and the latest blog updates: @Opinionatedsec1.