The Challenge with Corner Cases

“All models are wrong, some are useful” – George Box

Attributed to a British statistician, this phrase is still being quoted almost 50 years later and has likely inspired aphorisms like “perfect is the enemy of good”, the 80-20 rule, and the saying attributed to Confucius, “better a diamond with a flaw than a pebble without”.

Following up on that, perhaps due to a lingering fear of publishing invalidated opinions carried over from university, there are many examples of things accepted as fact despite corner cases disproving the hypothesis, or corner cases so obscure no one had previously considered them. We can speculate that no one ever pondered “what would happen if two black holes met somewhere in the universe, decided to combine into one, and we recorded it”, but on September 14th, 2015, this exact scenario did occur and confirmed Einstein’s theory of gravitational waves beyond refutation. Ironically, even Einstein himself wasn’t that confident in his theory and periodically changed his position on the matter, yet anyone using GPS has benefited from his understanding of the space-time relationship – flaws and all. So what does any of this have to do with security risk management, in three words or less?

Strategic Business Alignment

One of my regular questions on the podcasts is how our guests help their clients identify the point of diminishing returns, primarily because of my personal tendency to identify the handful of scenarios where a solution may not be effective; I could use some ideas on accepting good versus the costly pursuit of perfection. Revisiting “strategic business alignment”, as risk professionals we need to accept that organizations often embark down a path without a complete understanding of how they will handle everything that comes up. The concept of the minimum viable product, a Silicon Valley darling for years, can partially inform the risk assessment model. The MVP model could be cynically described as: get people using your software, fix the things that really are an issue after enough customers report them rather than agonize over every possible use case, and hopefully don’t run out of money before becoming profitable. The Agile Alliance does point out that settling for just enough that people will buy something doesn’t make a product viable in the long run, and we see that regularly with cracks in cloud infrastructure, financial meltdowns and so on.

Back to “all models are wrong”, if we take the MVP concept seriously, part of our effort will be to analyze why things are not going as planned, ideally looking for the root cause rather than applying duct tape and pushing on with the next release or growth initiative just to keep things on schedule. Schedules are important, but the courage to miss a date for safety, quality, or some other darn good reason will be rewarded in the long run.

“I love deadlines. I like the whooshing sound they make as they fly by.” – Douglas Adams

I would be the first to point out that a flaw in a business productivity application will have a far less significant impact on society than a flaw in water treatment, electrical generation or a piece of medical equipment, but can we address flawed models via resilience? As risk professionals we often possess the uncanny ability to identify one or two scenarios that an existing or proposed control will fail to address. At this point we have the choice of saying “this is unacceptable because …”, or we can ask those who may know more about some aspects of the problem or about the organization’s capability to respond – apparently even Einstein had his doubts and would speak with others in his field.

Posing a question like “if scenario one came to pass, what is the most credible and most extreme impact?” in a roomful of subject matter experts will most likely result in numerous lengthy responses, some contradictory, but themes tend to emerge. Most certainly watch for those extremes. I had dinner recently with a respected ICS security expert and completely agree with the position that some outcomes, no matter how unlikely, are too significant to knowingly leave to chance. If we as risk professionals identify such a scenario, I believe we should resist “damn the torpedoes” with everything we have, if professional ethics mean anything. That said, in most cases the worst possible outcome may be highly undesirable but recoverable. There is a generational difference in impact between a cardboard box for a C-suite member and a nuclear wasteland or polluted water.
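As a toy illustration of how those expert responses might be distilled into the “most credible” and “most extreme” figures, consider the minimal sketch below; the dollar estimates are invented for the example and do not describe any real scenario.

```python
# Illustrative only: distilling SME impact estimates into "most credible"
# vs. "most extreme". All dollar figures below are hypothetical.
from statistics import median

# Each expert's estimated impact (in dollars) if "scenario one" came to pass.
sme_estimates = [250_000, 400_000, 350_000, 2_000_000, 300_000, 450_000]

most_credible = median(sme_estimates)  # the theme that emerges from the room
most_extreme = max(sme_estimates)      # the outlier worth examining, not dismissing

print(f"Most credible impact: ${most_credible:,.0f}")
print(f"Most extreme impact:  ${most_extreme:,.0f}")
```

The choice of median versus maximum is itself a judgment call; the point is simply that contradictory answers can still be aggregated into something a decision maker can act on.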

Many corporate boards list “cyber security risk” in their top five, and reviewing a firewall or application security log for five minutes will confirm the threat is very real. That said, I know of no business that has decided to shut everything down because “things are just too challenging these days”; ironically, many are openly evaluating whether machine learning, process automation and cloud computing can give them a marketplace advantage.

In a business world where many run toward the fire instead of away from it, can we help those we serve balance the many enterprise risks, not just cyber, to give them the greatest likelihood of a successful outcome? Tim recently recommended a book on becoming a trusted advisor, which includes a great deal of discussion on dealing with mistakes. Theoretically solving for all corner cases and missing the opportunity window ultimately doesn’t serve anyone.


One Human Error Away from Business Disruption at a National Scale?

Those listening to the podcast episodes month over month may notice a theme emerging: identifying and working toward protecting a path to operational resilience is typically what matters most to an organization. For the second year in a row, the Caffeinated Risk Summer Show coincided with a widespread outage at a major Canadian business. On July 8th, 2022, Rogers Communications reported a national network outage that saw millions without cell or internet service and thousands of retailers without the ability to accept Interac payments. On June 25th, 2023, Suncor Energy Inc. issued a press release confirming a cyber security incident that was understandably light on details beyond the safety of customer records, but ensuing speculation pegged the impact at millions.

While many in the Calgary I.T. community know each other, details on the exact cause of the Suncor incident remain, as they should, tightly held, so this post is focused on the publicly observable outcomes. The Rogers and Suncor incidents are similar in timing and impact – early summer and payment card system availability – and potentially in initial cause: human error. While Rogers admitted the network outage was due to a mistake in the planned upgrade procedures, we have no insight into the actual cause of the Suncor incident, nor shall we speculate. Instead, we can look at published data trends and government intelligence to complete the threat model; as Jack Jones and Jack Freund maintained in their seminal risk management text, “we often have more data than we think”.
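To make “more data than we think” concrete, here is a minimal Monte Carlo sketch in the spirit of the FAIR-style quantification Jones and Freund describe. The frequency and magnitude parameters below are invented placeholders, not figures from either incident; a real analysis would calibrate them from published data trends and internal incident records.

```python
# Crude FAIR-style simulation: loss frequency x loss magnitude per trial year.
# All distribution parameters are hypothetical placeholders.
import random

def simulate_annual_loss(trials=10_000):
    losses = []
    for _ in range(trials):
        events = random.randint(0, 3)  # loss events in one simulated year
        # Per-event magnitude: lognormal spread centred near ~$440k.
        total = sum(random.lognormvariate(13.0, 1.0) for _ in range(events))
        losses.append(total)
    losses.sort()
    return {"median": losses[trials // 2], "p95": losses[int(trials * 0.95)]}

result = simulate_annual_loss()
print(f"Median simulated annual loss: ${result['median']:,.0f}")
print(f"95th percentile:              ${result['p95']:,.0f}")
```

Even a toy model like this forces the conversation away from “is the threat real?” toward “what range of loss should we plan to absorb?”, which is the question resilience work actually needs answered.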

[Infographic: Cyberthreat Defense Report highlights]

The 2023 Cyberthreat Defense Report was the basis for the Summer Show podcast, and it is worth noting that the top two obstacles to defending against cyberthreats were human factors. The Canadian Centre for Cyber Security lists numerous attack surface areas vulnerable to cyber threats, including cyber crime. Cyber crime goes by various names, such as phishing, ransomware, social engineering and business email compromise, but the common element is a human inside the organization using the organization’s technology to engage with an adversarial force.

While some organizational leaders have been quick to assess human error as a staffing or skills issue, opting for ever more training and in some cases even threats of dismissal, hopefully we are turning a corner on this legacy and rethinking our approach. Enterprise security risk management (ESRM) takes a mission-first approach to security prioritization, focusing on business engagement, and the Idaho National Laboratory’s Consequence-driven, Cyber-informed Engineering (CCE) model challenges us to look at each of those mission-impacting scenarios, identify how cyber elements could play a part in disruption, and reengineer around them. I am clearly a CCE fan, mentioning it on multiple episodes, buying copies of the book for my detection engineering teammates and sharing the program link with all unsuspecting folks who ask me about organizational resilience or operational technology security, but never mistake enthusiasm for truth without testing. Whether it is a software design flaw, a process design flaw, or simply a stress-induced cognitive error, I believe we need to accept human error at some point in the system and design systems accordingly. The challenge, of course, is that we cannot predict exactly where or how such errors will appear, therefore we need a different approach than “prevent everything” and “don’t screw up or you’re fired”.

The CCE book uses the term “hope and hygiene” for a failed security model often played out as compliance exercises, vulnerability scanning and simplistic user awareness training. Paraphrasing here: while such actions are important, they neglect the time-tested reality that at some point in the future a cyber-related failure will happen, and the organization should be able to recover. The “all roads lead to Rome” idiom applied to resilience also shows up in the devops camps, very well summarized by luminary Mark Russinovich in a 2020 Microsoft blog post, and in an offhand quote I overheard at an industry security summit this past winter, whose source shall remain anonymous due to subject sensitivity and my memory:

“Take a look at your network diagrams and all your maps of stuff. Close your eyes, put your thumb on something and say ‘XXXX now owns that’, and think through how you are going to get operations restored”
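Taken literally, that advice is easy to operationalize. The sketch below is a tongue-in-cheek but workable version of the exercise; the inventory entries are hypothetical, and a real run would pull from the network diagrams and asset maps the quote mentions.

```python
# Literal implementation of the overheard advice: pick a random asset,
# declare it compromised, and let the team talk through recovery.
# Inventory entries are hypothetical examples.
import random

asset_inventory = [
    "corporate Active Directory domain",
    "primary backup system",
    "EDR management console",
    "automated software distribution platform",
    "main WAN provider link",
]

compromised = random.choice(asset_inventory)
print(f"Tabletop prompt: an adversary now owns '{compromised}'.")
print("Question for the room: how do we restore operations?")
```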

The digitalization genie is out of the bottle and we are increasingly dependent on interconnected supply chains, automation, cyber-physical systems and virtualization for almost every aspect of our daily lives. This interconnectedness creates a list of vulnerabilities that is growing near-exponentially, most of which will never come to pass; identifying the key intersections of cyber element failure and cascading impact therefore becomes the brave new world security professionals must lead our organizations into. I am offering some awkward conversation starters below, with a small sketch after the list showing one way to track them, not as an affront to past leadership decisions but as a chance to improve each of our security programs in meaningful ways before we too fall victim.

  1. Much of our defense posture relies on Active Directory controls and privileged account protection measures; how would we rebuild if we lost control of the corporate domain?
  2. What would we do if an adversary re-encrypted all our backup systems and destroyed our active accounts databases?
  3. We have ensured more than 95% of our workstations and servers are running a top-tier endpoint detection and response product; what would we do if an adversary were able to unhook that process?
  4. What if there is a mistake in the next release of our custom system that we don’t pick up in UAT; how much could we stand to lose?
  5. How long can we operate if our main WAN provider is unavailable for more than 8 hours?
  6. How can we respond if an adversary takes control of our automated software installation platform to distribute their malware?
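One possible way to keep these questions alive after the meeting ends is a simple resilience register, sketched below. The scenario names, dates and recovery targets are hypothetical placeholders, not recommendations.

```python
# A minimal resilience register for tracking the conversation starters above.
# All entries are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ResilienceScenario:
    question: str
    recovery_plan_exists: bool
    last_exercised: str         # ISO date of last tabletop, or "never"
    target_recovery_hours: int

register = [
    ResilienceScenario("Loss of corporate AD domain", False, "never", 72),
    ResilienceScenario("Backups re-encrypted by adversary", True, "2022-11-01", 48),
    ResilienceScenario("EDR agent unhooked fleet-wide", False, "never", 24),
]

# Surface scenarios that have no plan or have never been walked through.
for s in register:
    if not s.recovery_plan_exists or s.last_exercised == "never":
        print(f"Needs a tabletop: {s.question}")
```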

Admittedly these will not be easy conversations, and every organization will need to do its own analysis. That said, let’s end this post on an optimistic note: nothing is impossible once we are committed, even if that means accepting some loss. Consider the following:

  • There are many skilled and capable people working in our ICT departments,
  • Cyber education is now mainstream, not a dark art,
  • Hardware and software quality is higher than it’s ever been while cost is going the other direction,
  • Organizations are investing in cyber security,
  • Rogers did repair their nationwide outage in a couple of days,
  • Interac did invest in network resilience,
  • Petro-Canada point of sale services were restored in less than six days.