Making security decisions based on verifiable facts

Security decisions should be based on verifiable data - facts - rather than opinions. I’ve seen CISOs and many security operators impeded by a lack of transparency into security data, jaded by product features and marketing fluff, and limited in their ability to glean high-quality, data-driven insights to inform decision making. This is a problem that GRIMM is working to solve.
Although there is no single solution to this problem, CISOs and security operators can start by integrating the right combination of technology and policy to enable better data collection and information sharing. Doing so will force operators to hedge against the “vendor trust” that so many incumbent technology providers have spent decades cultivating. Most importantly, it will require a marked shift away from the outdated “best practices” ingrained in security practitioners early in their careers.
Let’s use patch management to illustrate what I mean. A few months ago, a wave of patches was released to work around the design flaw known as Meltdown, which allows an attacker to read arbitrary memory. Initially, the concerns were about performance losses. A few months later, we learned that Microsoft’s patch for Windows 7 may have fixed Meltdown, but it opened a much larger security hole: one that was easier to exploit, allowed faster reading of arbitrary memory, and also allowed writing arbitrary memory. The upshot is that the January security patch took people from a bad place to an even worse one. This isn’t an isolated incident, nor is it limited to Microsoft. Apple’s patch for Meltdown introduced an information leak that can be used to bypass kernel ASLR in iOS 11.2.
Other examples of patching gone awry include MS14-045, released on August 12, 2014, with a recommendation to uninstall it three days later and a replacement issued on the 27th. There was Apple’s update that allowed logging in as root with no password, and then the patch to fix that broke file sharing. There have been many reports of botched iOS updates that failed and required wiping the device, or in some cases rendered it permanently inoperable. The list goes on.
These examples are compelling arguments that the advice to install patches as soon as they come out may work a lot better in theory than in practice. Would it be better to let others be the lab rats and see how it works out for them? The problem with that strategy is that unless the flaw is a performance issue or causes downtime, such as broken file sharing, many of these problems aren’t discovered and publicly disclosed for weeks, months or even years. If security updates aren’t going to be applied right away, how long should one wait? To be clear, I think applying updates, especially security updates, quickly puts defenders in a better position more often than not, so I recommend testing the patch on a test system and, if it doesn’t break anything, rolling it out to your users. If that’s not an option, make sure to back everything up right before applying the patch and, as always, test those backups!
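To make that concrete, here is a minimal sketch of the kind of staged-rollout gate I have in mind. It is purely illustrative and not part of any GRIMM tooling; the test ring, bake time, failure counter and backup check are all hypothetical names introduced for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of how a patch fared in the test ring.
@dataclass
class TestRingResult:
    patch_id: str
    deployed_at: datetime
    failures: int          # crashes, broken services, failed logins, etc.
    backup_verified: bool  # did a restore from backup actually succeed?

def ready_for_broad_rollout(result: TestRingResult,
                            bake_time: timedelta = timedelta(days=3)) -> bool:
    """Illustrative policy: promote a patch only after it has soaked in the
    test ring with no failures and a verified backup exists to roll back to."""
    soaked_long_enough = datetime.utcnow() - result.deployed_at >= bake_time
    return soaked_long_enough and result.failures == 0 and result.backup_verified

# Example: a patch that has been in the test ring for five days with no issues.
example = TestRingResult(
    patch_id="KB0000000",  # placeholder identifier
    deployed_at=datetime.utcnow() - timedelta(days=5),
    failures=0,
    backup_verified=True,
)
print(ready_for_broad_rollout(example))  # True
```

The specific numbers don’t matter; the point is that the decision to roll a patch out is encoded somewhere it can be inspected and adjusted, rather than made on gut feel.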
However, this brings me to my larger point: people in the computer security industry give out advice based on whether or not it “feels right.” I would credit this as a huge component of the problem of snake oil salesmen being able not only to survive, but to thrive in the industry. “Why should you do XYZ to keep your data safe? Because an expert said it was a good idea.” That is the bar for advice which has been set. It’s probably better than taking advice from a non-expert, or not taking any advice at all, but we don’t really know. There’s typically little to no public data supporting either side, so people have to rely on anecdotal experiences, most of which must be kept secret due to non-disclosure agreements.
This means that when there is pressure to give the probability of an incident occurring, people just make things up. Any answer given is unlikely to be questioned, especially if it’s a low number. Furthermore, the person giving the answer will almost never be proven wrong. If they said there was a 5% chance of being hit by a Denial of Service attack in the next quarter and one does happen, the answer is that the company was unlucky and hit that 5%.
I want to change this. I want people to be able to make security decisions based on verifiable facts, rather than people’s opinions. If a percentage probability of a specific type of attack in a specific timeframe is given, the person giving it should be expected to show their work, and they should have no problem doing so. The Killerbeez framework we are developing, which will allow end users to better integrate multiple security testing tools onto a single platform for Windows machines, is part of this, but it only tests the software. We’re also going to start another Internal Research and Development (IRAD) project in the next few months to work on this problem in another way: by testing the advice people are giving out, be it in laws, regulations, security standards or elsewhere.
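For the Denial of Service example above, “showing your work” could be as simple as deriving the number from incident history and stating the assumptions behind it. The sketch below is purely illustrative (it is not part of Killerbeez or the IRAD project); it assumes incidents arrive independently at a roughly constant rate, a Poisson assumption, and estimates the chance of at least one incident next quarter from past quarterly counts.

```python
import math

def prob_at_least_one_incident(past_quarterly_counts: list[int]) -> float:
    """Estimate the probability of at least one incident next quarter.

    Assumes incidents follow a Poisson process with a constant rate, so the
    estimated rate is the mean of the historical quarterly counts and
    P(at least one) = 1 - exp(-rate). A real estimate should also state its
    uncertainty and the assumptions it rests on.
    """
    rate = sum(past_quarterly_counts) / len(past_quarterly_counts)
    return 1.0 - math.exp(-rate)

# Hypothetical history: DoS incidents observed in each of the last eight quarters.
history = [0, 1, 0, 0, 2, 0, 1, 0]
print(f"P(at least one DoS next quarter) ~ {prob_at_least_one_incident(history):.0%}")
```

The point isn’t that this particular model is right; it’s that the inputs and assumptions are on the table, so the estimate can be questioned, tested against what actually happens, and improved.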
A prime example of the type of thing we want to uncover is the recommendation that people be forced to change their passwords regularly. Once upon a time, there was some logic to back this up (it could cut off an attacker’s access, mitigate the threat of having a password brute forced, etc.), but it wasn’t based on an actual scientific study. When someone did run a study, they found that the practice generally does more harm than good. Obviously we will not be looking at that specific issue since it has already been researched and evaluated, but it provides a good example of largely unchallenged advice which was later reversed.
I’m aware that these things are complicated, and there’s typically not a one-size-fits-all solution. The effects of many years of training people that “if you see a lock icon, you know you’re safe” are not lost on me. I realize that good advice today might be bad advice next year, and that advice is going to be context dependent. We will certainly keep these things in mind, but they shouldn’t stop us from tackling the problem. Information security is kind of a big deal these days, and unless things get better, legislators are going to keep passing more laws and regulations to try to solve the problem. Having hard data would go a long way toward improving the situation, and if it does come to passing more regulations, it would help legislators decide what should be in them, making those regulations reasonable (as in, actually attaining their stated goals with minimal unintended consequences). There are people working on this, but they currently rely on best practices, which simply means the advice is popular, not that it has been demonstrated to be effective.
It’s a hard problem, no doubt, but that’s what makes it interesting, and that’s exactly the kind of stuff we live for here at GRIMM.
