The Challenge with Corner Cases

“All models are wrong, but some are useful.” – George Box

Attributed to a British statistician, this phrase is still being quoted almost 50 years later and has likely inspired related aphorisms such as “perfect is the enemy of good”, the “80-20 rule”, and Confucius’s “better a diamond with a flaw than a pebble without”.

Following up on that, perhaps due to a lingering university-era fear of publishing invalidated opinions, there are many examples of things accepted as fact despite corner cases that disprove the hypothesis, or that are so obscure no one had previously considered them. We can speculate that no one ever pondered “what would happen if two black holes met somewhere in the universe, combined into one, and we recorded it”, but on September 14th, 2015, this exact scenario did occur and confirmed Einstein’s theory of gravitational waves beyond dispute. Ironically, even Einstein himself wasn’t that confident in his theory and would periodically change his position on the matter, yet anyone using GPS has benefited from his understanding of the time/space relationship, flaws and all. So what does any of this have to do with security risk management, in three words or less?

Strategic Business Alignment

One of my regular questions on the podcast is how our guests help their clients identify the point of diminishing returns, primarily due to my personal tendency to identify the handful of scenarios where a solution may not be effective; I could use some ideas on accepting good versus the costly pursuit of perfection. Revisiting “strategic business alignment”: as risk professionals we need to accept that organizations often embark down a path without a complete understanding of how they will handle everything that comes up. The minimum viable product (MVP) concept, a Silicon Valley darling for years, can partially inform the risk assessment model. The MVP model could be cynically described as: get people using your software, fix the things that really are an issue after enough customers report them rather than agonizing over every possible use case, and hopefully don’t run out of money before becoming profitable. The Agile Alliance does point out that settling for just enough that people will buy something doesn’t make a product viable in the long run, and we see that regularly with cracks in cloud infrastructure, financial meltdowns and so on.

Back to “all models are wrong”, if we take the MVP concept seriously, part of our effort will be to analyze why things are not going as planned, ideally looking for the root cause rather than applying duct tape and pushing on with the next release or growth initiative just to keep things on schedule. Schedules are important, but the courage to miss a date for safety, quality, or some other darn good reason will be rewarded in the long run.

“I love deadlines. I like the whooshing sound they make as they fly by.” – Douglas Adams

I would be the first to point out that a flaw in a business productivity application will have a far less significant impact on society than a flaw in water treatment, electrical generation or a piece of medical equipment, but can we address flawed models via resilience? As risk professionals we often possess the uncanny ability to identify one or two scenarios that an existing or proposed control will fail to address. At this point we have the choice of saying “this is unacceptable because …”, or we can ask those who may know more about some aspects of the problem or the organization’s capability to respond; apparently even Einstein had his doubts and would speak with others in his field.

Posing a question like “if scenario one came to pass, what is the most credible and most extreme impact?” in a roomful of subject matter experts will most likely result in numerous lengthy responses, some contradictory, but themes tend to emerge. Certainly watch for those extremes. I had dinner recently with a respected ICS security expert and completely agree with the position that some outcomes, no matter how unlikely, are too significant to knowingly leave to chance. If we as risk professionals identify such a scenario, I believe we should resist “damn the torpedoes” with everything we have, if professional ethics mean anything. That said, in most cases the worst possible outcome may be highly undesirable but recoverable. There is a generational difference in impact level between a cardboard box for a C-suite member and a nuclear wasteland or polluted water.

Many corporate boards list “cyber security risk” in their top 5, and reviewing a firewall or application security log for five minutes will confirm the threat is very real. That said, I know of no business that has decided to shut everything down because “things are just too challenging these days”; ironically, many are openly evaluating whether machine learning, process automation and cloud computing can give them a marketplace advantage.

In a business world where many run toward the fire instead of from it, can we help those we serve balance the many enterprise risks, not just cyber, to give them the greatest likelihood of a successful outcome? Tim recently recommended a book on becoming a trusted advisor, which includes a great deal of discussion on dealing with mistakes. Theoretically solving for all corner cases while missing the opportunity window ultimately doesn’t serve anyone.

One Human Error from Business Disruption at a National Scale?

Those listening to the podcast episodes month over month may notice a theme emerging: identifying and working toward protecting a path to operational resilience is typically what matters most to an organization. For the second year in a row, the Caffeinated Risk Summer show coincided with a widespread outage at a major Canadian business. On July 8th, 2022, Rogers Communications reported a national network outage that left millions without cell or internet service and thousands of retailers without the ability to accept Interac payments. On June 25th, 2023, Suncor Energy Inc. issued a press release confirming a cyber security incident that was obviously light on details beyond customer record safety, but ensuing speculation pegged the impact at millions.

While many in the Calgary I.T. community know each other, details on the exact cause of the Suncor incident remain, as they should, tightly held, so this post is focused on the publicly observable outcomes. The Rogers and Suncor incidents are similar in timing and impact (early summer, payment card system availability) and potentially in initial cause: human error. While Rogers admitted the network outage was due to a mistake in the planned upgrade procedures, we have no insight into the actual cause of the Suncor incident, nor shall we speculate. Instead, we can look at published data trends and government intelligence to complete the threat model; as Jack Jones and Jack Freund maintained in their seminal risk management text, “we often have more data than we think”.
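To illustrate the “more data than we think” point, a rough FAIR-style estimate can be sketched as a small Monte Carlo simulation: annual loss exposure is loss event frequency combined with per-event loss magnitude. Every number below is an invented placeholder for illustration, not data from the Rogers or Suncor incidents.

```python
import random
import statistics

def simulate_annual_loss(trials: int = 50_000, seed: int = 1) -> dict:
    """Toy FAIR-style Monte Carlo: sample a yearly loss event count,
    then a loss magnitude for each event, and summarize the totals.
    All distribution parameters are illustrative assumptions."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        events = rng.randint(0, 3)          # assumed loss event frequency
        loss = sum(
            rng.lognormvariate(12.4, 1.0)   # assumed magnitude, median ~$245k
            for _ in range(events)
        )
        totals.append(loss)
    totals.sort()
    return {
        "mean": statistics.mean(totals),
        "p90": totals[int(0.9 * trials)],   # 90th-percentile loss year
    }
```

Presenting a range like this (mean plus a high percentile) is the kind of estimate quantitative risk practitioners offer in place of a single point value.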

Infographic: 2023 Cyberthreat Defense Report highlights

The 2023 Cyberthreat Defense Report was the basis for the Summer Show podcast, and it is worth noting that the top two obstacles to defending against cyberthreats were human factors. The Canadian Centre for Cyber Security lists numerous attack surface areas vulnerable to cyber threats, including cyber crime. Cyber crime goes by various names, such as phishing, ransomware, social engineering, business email compromise and so forth, but the common element is a human inside the organization using the organization’s technology to engage with an adversarial force.

While some organizational leaders have been quick to assess human error as a staffing or skills issue, opting for ever more training and in some cases even threats of dismissal, hopefully we are turning a corner on this legacy and rethinking our approach. ESRM takes a mission-first focus on security prioritization through business engagement, and the Idaho National Labs CCE model challenges us to look at each of those mission-impacting scenarios, identify how cyber elements could play a part in disruption, and reengineer around them. I am clearly a CCE fan, mentioning it on multiple episodes, buying copies of the book for my detection engineering teammates and sharing the program link with all unsuspecting folks who ask me about organizational resilience or operational technology security, but never mistake enthusiasm for truth without testing. Whether it is a software design flaw, a process design flaw, or simply a stress-induced cognitive error, I believe we need to accept human error at some point in the system and design systems accordingly. The challenge, of course, is that we cannot predict exactly where or how such errors will appear; therefore we need a different approach than “prevent everything” and “don’t screw up or you’re fired”.

The CCE book uses the term “hope and hygiene” for a failed security model, often played out as compliance exercises, vulnerability scanning and simplistic user awareness training. Paraphrasing here: while such actions are important, they neglect the time-tested reality that at some point in the future a cyber-related failure will happen, and the organization should be able to recover. The “all roads lead to Rome” idiom applied to resilience also shows up in the DevOps camps, very well summarized by luminary Mark Russinovich in a 2020 Microsoft blog post, and in an offhand quote I overheard at an industry security summit this past winter, whose source shall remain anonymous due to subject sensitivity and my memory.

“Take a look at your network diagrams and all your maps of stuff. Close your eyes, put your thumb on something and say ‘XXXX now owns that’, and think through how you are going to get operations restored”

The digitalization genie is out of the bottle, and we are increasingly dependent on interconnected supply chains, automation, cyber-physical and virtualization systems for almost every aspect of our daily lives. This interconnectedness creates a list of vulnerabilities that is growing almost exponentially, most of which will never come to pass; therefore, identifying the key intersections of cyber element failure and cascading impact becomes the brave new world security professionals must lead our organizations into. I am offering some awkward conversation starters, not as an affront to past leadership decisions but as a chance to improve each of our security programs in meaningful ways going forward, before we too fall victim.

  1. Much of our defense posture relies on Active Directory controls and privileged account protection measures; how would we rebuild if we lost control of the corporate domain?
  2. What would we do if an adversary re-encrypted all our backup systems and destroyed our active accounts databases?
  3. We have ensured more than 95% of our workstations and servers are running a top-tier endpoint detection and response product; what would we do if an adversary were able to unhook that process?
  4. What if there is a mistake in the next release of our custom system that we don’t pick up in UAT; how much could we stand to lose?
  5. How long can we operate if our main WAN provider is unavailable for more than 8 hours?
  6. How can we respond if an adversary takes control of our automated software installation platform to distribute their malware?
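The “put your thumb on something” exercise quoted above can even be roughed out in code: model services and their dependencies as a graph, then walk the reverse edges to see everything a single compromised component drags down. The service names and links below are hypothetical, invented purely for illustration.

```python
from collections import deque

# Hypothetical dependency map: "X depends on Y" means losing Y degrades X.
# These services and links are invented for illustration only.
DEPENDS_ON = {
    "payments":      ["point_of_sale", "wan_link"],
    "point_of_sale": ["active_directory", "endpoint_fleet"],
    "payroll":       ["active_directory", "wan_link"],
    "backups":       ["active_directory"],
}

def impacted_by(failed: str) -> set:
    """Breadth-first walk of reverse dependencies: everything that
    transitively relies on the failed component."""
    reverse = {}
    for svc, deps in DEPENDS_ON.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(svc)
    hit, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for svc in reverse.get(node, []):
            if svc not in hit:
                hit.add(svc)
                queue.append(svc)
    return hit
```

In this toy map, losing `active_directory` cascades into point of sale, payments, payroll and backups, which is exactly the kind of concentration the conversation starters above are meant to surface.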

Admittedly these will not be easy conversations, and every organization will need to do its own analysis. That said, let’s end this post on an optimistic note: nothing is impossible once we are committed, even if that commitment is acceptance of loss. Consider the following:

  • There are many skilled and capable people working our ICT departments,
  • Cyber education is now mainstream, not a dark art,
  • Hardware and software quality is higher than it’s ever been while cost is going the other direction,
  • Organizations are investing in cyber security,
  • Rogers did repair their nationwide outage in a couple of days,
  • Interac did invest in network resilience,
  • Petro-Canada point of sale services were restored in less than six days

Design Thinking & Security Controls

During the green room chat before our first podcast episode, Rachelle Loyear and I began discussing “Design Thinking”. My personal experience was limited to a one-day workshop with IBM a couple of years back, but as you can hear in the podcast, this is an area Rachelle has spent a lot of time exploring. Over the years I have worked with many high-quality security products, and for the most part the user experience (UX) almost always felt like an afterthought. This is not a slight against the companies that work very hard to bring us these products, and as techies we tend to want large volumes of information and lots of buttons on every screen, but a recent personal experience has given me real cause for reflection on UX design, even for security tools.

The basic idea of “Design Thinking” is best summed up in a quote from IDEO, the company that has brought many of the current practices in this methodology forward over the last two decades.

“Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.” – Tim Brown, Executive Chair of IDEO

Do read Tim Brown’s books for the full picture, but essentially a good design will be desirable from a human perspective, technologically feasible and economically viable for the company. While most people think of products as design candidates, software applications have certainly adopted this focus on UX, and a similar design thinking approach can be applied to services, as discussed in “This is Service Design Thinking” by Marc Stickdorn.

Venn diagram: the intersection of feasibility, desirability and viability

The “3I” model of “Inspiration, Ideation and Implementation” was developed by IDEO 20 years ago, and the definitions below are a composite of the various interpretations offered by different service providers.

Inspiration: Identifying the problem or opportunity that warrants a solution, primarily through considering the actual user of the product or service and the challenges they are facing with current offerings.

Ideation: This goes deeper than just brainstorming; promising ideas are further assessed with a multi-disciplinary team to develop fleshed-out conceptual solutions, and the best candidates can then be turned into testable prototypes.

Implementation: Prototypes are developed and tested with users, and user feedback drives updated prototypes until finally moving from prototype to a marketplace offering.

The main premise behind design thinking is that an interdisciplinary group working on a problem from the outset will develop solutions that are more innovative and more likely to succeed than traditional R&D models. The challenge for companies more comfortable with the engineering-then-design approach is that there is no one best way to move through the process, and it may appear chaotic at times, but the approach is much more mainstream now than when IDEO first started two decades ago. How we can help as security practitioners is to work within our organizations to ensure we are part of that interdisciplinary team when we hear terms like “design thinking” and “agile” associated with new projects. Without security specialists involved in these design and delivery activities, the trend of last-minute add-on compromises is likely to continue.

To move this conversation from the academic to real life, one could start with a review of Tim Brown’s short blog post on empathy, one of the inputs to the inspiration phase listed above: putting yourself in the position of the person interacting with a product or service. The empathy concept became very real recently when my partner, a physical therapy student, was required to spend 24 hours without the use of her dominant arm while performing day-to-day activities. Where feasible I supported her by also forgoing the use of my right arm, which made the challenges of modern computer security controls immediately obvious.

Long, complex passwords can be practically impossible to type for a person with dexterity limits; for example, try reaching Shift+4 with two fingers to input the $ sign as a special character. The all-too-common “Ch@ngeMeN0w!” could be a very unpleasant onboarding password for a new employee or student. More critical accounts and remote access now typically require a secondary code from a smartphone app, and even with two hands I have struggled from time to time to respond quickly enough to Microsoft Authenticator validation requests.

Creating more inclusive, accessible workplaces and public services isn’t just a nice thing to do; it is actually the law in many countries. No one at Caffeinated Risk is a lawyer, but our research did uncover an interesting clause connected with the Americans with Disabilities Act of 1990, which specifies the need for information technology systems to be accessible through multiple means.

“An accessible information technology system is one that can be operated in a variety of ways and does not rely on a single sense or ability of the user. This is important because a system that provides output only in visual format may not be accessible to people who are blind or have low vision, and a system that provides information only in audio format may not be accessible to people who are deaf or hard of hearing. Some individuals with disabilities may also need accessibility-related software or peripheral devices in order to use systems that comply with Section 508.”

Section 508, mentioned in the text above, is part of the Rehabilitation Act of 1973 and deals specifically with electronic information and technology. While governments may have been thinking of data stored in electronic systems back in 1973, in 2021 there isn’t an enterprise anywhere that isn’t faced with that same requirement. To that end, we have created numerous information security policies, which will include at least one policy on access control, with passwords and MFA likely to be the defined requirements. Most password policies will also include very specific requirements pertaining to complexity, password length, account lockout thresholds and so forth.
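As a concrete illustration, a typical policy of the kind described above might be enforced with a check like the sketch below. The specific thresholds (12 characters, four character classes) are assumptions for illustration rather than a recommendation, and note that each rule is also a potential accessibility barrier: the special-character class effectively mandates shifted-key combinations.

```python
import re

# Illustrative password policy; the thresholds are assumed, not a standard.
POLICY = {
    "min_length": 12,
    "classes": {
        "upper":   r"[A-Z]",
        "lower":   r"[a-z]",
        "digit":   r"[0-9]",
        "special": r"[^A-Za-z0-9]",  # forces shifted-key input on most layouts
    },
}

def check_password(pw: str) -> list:
    """Return a list of policy violations (an empty list means compliant)."""
    problems = []
    if len(pw) < POLICY["min_length"]:
        problems.append(f"shorter than {POLICY['min_length']} characters")
    for name, pattern in POLICY["classes"].items():
        if not re.search(pattern, pw):
            problems.append(f"missing {name} character")
    return problems
```

Under these assumed rules the onboarding password “Ch@ngeMeN0w!” mentioned earlier passes every check, which is exactly the problem: a fully compliant password can still be nearly untypeable for someone with limited dexterity.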

While it may be true that certain password complexity requirements and MFA solutions do not consider accessibility, the need for securing access to electronic systems does not disappear. And although alternative mechanisms such as biometrics are now commercially viable, included on both mobile devices and modern laptops, availability does not always equal usability.

In Praise of Design Thinking:

Without participating in the 24-hour exercise, I am not sure I would have fully considered the impact many of our security controls might have on accessibility. Password logins are only one challenge; secure areas may not have badge readers at a practical height for someone in a wheelchair, and the same could be said for keypads and PIN-based door locks. A number of ideas are already coming to mind on how we could solve some of these challenges, but it will take experts in software interfaces, operating systems, physical design and policy creation to resolve these issues.

A quick search for accessibility features will show that commercial operating systems like Windows do have a number of options allowing disabled people to interact with the operating system itself. I suspect the challenge will reside more with the applications running on top. For example, how many building access applications could make a similar claim? Full disclosure: I have not looked, so it would be great to identify any such offerings in the market and review how they met the design challenges.

As a call to action for our readers, please feel free to comment on these three points to ponder. Depending on interest, this may lead to more research and perhaps even a podcast guest with deep domain knowledge in this area.

How many organizations have provisions within their current information security policies to permit user authentication via methods other than passwords and token/time based multifactor authentication?

How many organizations have deployed a technology stack within their company that would facilitate secure access to enterprise resources without the use of a password?

How many organizations are now including accessibility features as requirements in their technology investments?