Wednesday, December 6, 2017

A Better Norm for Enforced Backdoors

This is the kind of joke you can only see in a Wonder Woman comic, for what should be obvious reasons.

So various people in the government think they can force a private American company to implement a backdoor in its product without a warrant. But they also say they haven't done this yet.

Part of the reason is that doing classified work in non-classified environments comes with risk: classification systems are effective partly because everyone inside the system has signed off on the idea. Threats of prosecution only go so far as a preventative measure against leaks, as we are now hyper-aware.

The other major reason is that, as a matter of policy, forced backdoors are terrible in a way that is visibly obvious to anyone and everyone who has looked at them. We want to claim a "Public-Private Partnership," and that's a community-wide thing, and this is a tiny community.

What everyone is going to expect with a public-private partnership is simple: shared risk. If you ask the Government whether they're going to insure a company against the potential financial harm of any kind of operation, including a backdoor, they'll say "hell no!" But then why would they expect a corporation to go along with it? These sorts of covert operations are essentially financial hacks that tax corporations because governments don't want to pay the up-front costs of doing R&D on offensive methods, and the companies know it.

The backdoors problem is the kind of equities issue that makes the VEP look like the tiny peanuts it is, and it's one with an established norm that the US Government enforces, unlike almost every other area of cyber. Huawei, Kaspersky, and ZTE have all (allegedly) paid the price for being used by their host governments. Look at what Kaspersky and Microsoft are saying when faced with this issue: "If asked, we will MOVE OUR ENTIRE COMPANY to another Nation State."

In other words, whoever is telling newspapers that enforced backdoors are even on the table is being highly irresponsible or doesn't understand the equities at stake.

Tuesday, December 5, 2017

The proxy problem to VEP

Ok, so my opinion is that the VEP should set very wide and broad guidelines and never try to deal with the specifics of any vulnerability. To be fair, my opinion is that it can ONLY do this, or else it is fooling itself, because the workload involved in the current description of any VEP is really, really high.

One data point we have is that the Chinese vulnerability reporting team apparently takes a long time on certain bugs. My previous analysis was that they took bugs they knew were blown and gave them to various Chinese low-end actors to blast all over the Internet, as their way of informally muddying the waters (and protecting their own ecosystem). But a more modern analysis suggests a formal, centralized process.

So here's what I want to say, as a thought experiment: many parts of the VEP problem map homomorphically onto the problem of finding a vulnerability and then asking yourself whether it is exploitable.

Take the DirtyCow vulnerability, for example. Is it at all exploitable? Does it affect Android? How far back does the vulnerability go? Does it affect GRSecced systems? What about RHEL? What about stock systems with uncommon configurations? What about systems with low memory, or systems with TONS OF MEMORY? What about systems under heavy load? What about future kernels: is this a bug likely to still exist in a year?
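Even the narrowest of these questions, "how far back does the vulnerability go?", hides a triage script somebody has to write and maintain. Here's a minimal sketch for DirtyCow (CVE-2016-5195); the version boundaries below are illustrative placeholders rather than an authoritative affected-version table, since distro backports change everything:

```python
# Sketch: one DirtyCow triage question ("how far back does the bug go?")
# reduced to a version-range check. Boundary versions are illustrative;
# a real assessment needs each distro's own backport history.

def parse_kver(s):
    """Turn a kernel string like '4.4.0-31-generic' into (4, 4, 0)."""
    base = s.split("-")[0]
    parts = (base.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

INTRODUCED = (2, 6, 22)  # roughly when the racy code appeared in mainline
FIXED_STABLE = [(4, 8, 3), (4, 7, 9), (4, 4, 26)]  # illustrative fix points

def possibly_affected(kver_str):
    v = parse_kver(kver_str)
    if v < INTRODUCED:
        return False
    # At or past the fix point on its own stable branch means patched --
    # at least according to mainline; backports can invalidate this.
    for fix in FIXED_STABLE:
        if v[:2] == fix[:2] and v >= fix:
            return False
    return v < (4, 9, 0)  # kernels from the fixed dev cycle onward are clean

print(possibly_affected("4.4.0-31-generic"))  # -> True
print(possibly_affected("4.8.3"))             # -> False
```

And that only answers the mainline-version question. Every distro kernel needs its own table, and every other question on the list above needs an exploitation engineer, which is exactly the workload problem.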

Trees have roots and exploits get burned, and there's a strained analogy in here somewhere. :)

The list of questions is endless, and each one takes an experienced Linux kernel exploitation team at least a day to answer. And that's just one bug. Imagine you had a hundred bugs, or a thousand bugs, every year, and you had to answer these questions. Where is this giant team of engineers that, instead of writing more kernel exploits, is answering all these questions for the VEP?

Every team that has ever had an 0day has seen an advisory come out and said "Oh, that's our bug," and then, when the patch came out, realized it was NOT their bug at all, just another bug that looked very similar and was maybe even in the same function. Or you've seen patches come out and your exploit stopped working, and you thought "I'm patched out," but the underlying root cause was never handled, or was handled improperly.

We used to make a game out of second guessing Microsoft's "Exploitability" indexes. "Oh, that's not exploitable? KOSTYA GO PROVE THEM WRONG!"

In other words: I worry about workload a lot with any of these processes that require high levels of technical precision at the upper reaches of government.

Tuesday, November 28, 2017

Matt Tait is Keynoting INFILTRATE 2018!

So I know INFILTRATE is not aimed at the security policy crowd, but Matt Tait, formerly of GCHQ and Google Project Zero, and now a senior fellow at the Robert S. Strauss Center for International Security and Law at the University of Texas at Austin, is going to give a keynote this year that I think the audience of this blog will want to attend.

You may, of course, know him only as @pwnallthethings, or because he was involved in our running Russian election drama, but I've honestly never met someone who has both the technical chops that Matt Tait has and the ability to absorb the legal and policy area, communicate it, and project how it will fold in spacetime in the future.

I spent some time last week talking to him about his speech, and I already know it's good. :)

So if you are not already registered, you should be!

Tuesday, October 31, 2017

The Year of Transparency

I'm just going to quote a small section here of Rob Graham's blog on Kaspersky, ignoring all the stuff where he calls for more evidence, like everyone does, because it's boring and irrelevant.
"I believe Kaspersky is guilty, that the company and Eugene himself, works directly with Russian intelligence.

That's because on a personal basis, people in government have given me specific, credible stories -- the sort of thing they should be making public. And these stories are wholly unrelated to stories that have been made public so far."

There's a lot to read from the Kaspersky press release on the subject of their internal inquiry. But the main thing to read from it is that the US information security community has already had a master class on Russian information operations and yet the Russians still think we will fall for it.

If any of you have a middle schooler, you know that they will gradually up the veracity of their lies when they get caught skipping school. "I was on time"->"I was a bit late"->"I missed class because I was sick"->"I just felt like playing the new Overwatch map so I didn't go to school."

In the Kaspersky case we are led to believe that Eugene was completely caught out by these accusations and, at the same time, that someone in 2014 brought him a zip file full of unreleased source code for NSA tools, which he immediately ordered deleted without even looking at it and without asking any detailed questions about the matter.

This is what all parents call: Bullshit.

The US likely has multiple kinds of evidence on KasperskyAV:

  • SIGINT from the Israelis which has KEYLOGS AND SCREENSHOTS of bad things happening INSIDE KASPERSKY HQ (and almost certainly camera video/audio which are not listed in the Kaspersky report but Duqu 2.0 did have a plugin architecture and no modern implant goes without these features)
  • Telemetry from various honeypots set up for Kaspersky analysis. These would be used to demonstrate not just that Kaspersky was "pulling files into the cloud" but HOW and WHEN and using what sorts of signatures. There is a difference to how an operator pulls files versus an automated system, to say the least. What I would have done is feed the Russians intel with codewords from a compromised source and then watched to see if any of those codewords ended up in silent signatures.
  • HUMINT, which is never mentioned anywhere in any public documents, but you have to assume the CIA was not just sitting around in coffee bars wearing tweed jackets all this time, wondering what was up with this Kaspersky thing they keep reading about. Needless to say, the US does not go to the lengths it has gone to without at least asking questions of its HUMINT team.
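The codeword-seeding idea in the telemetry bullet is mechanically simple. A minimal sketch, with all names and data invented for illustration: give each suspected collection channel a unique, unguessable fake codeword, then watch whether any of those codewords later show up in silent-signature pulls.

```python
# Sketch of the canary-codeword technique: unique fake codewords per feed,
# then attribution by which codeword surfaces in observed signatures.
# Everything here (channel names, codeword format) is invented.

import secrets

def make_canaries(channels):
    """One unique, unguessable fake codeword per feed channel."""
    return {f"OP-{secrets.token_hex(4).upper()}": ch for ch in channels}

def attribute_leaks(canaries, observed_signatures):
    """Return the channels whose canaries appeared in signature pulls."""
    leaked = set()
    for sig in observed_signatures:
        for word, channel in canaries.items():
            if word in sig:
                leaked.add(channel)
    return leaked

canaries = make_canaries(["source-A", "source-B", "source-C"])
word_for_b = next(w for w, ch in canaries.items() if ch == "source-B")
observed = [f"*{word_for_b}*", "*GENERIC_MALWARE_STRING*"]
print(attribute_leaks(canaries, observed))  # -> {'source-B'}
```

The point is that a hit proves not just collection but which compromised source the collector was reading.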
I know the Kaspersky people think I have something against them, which I do not, or that I have inside info, which I also do not. But the tea leaves here literally spell the hilarity out in Cyrillic, which I can, in fact, read. 

Wednesday, October 11, 2017

The Empire Strikes Back

XKCD needs to calculate the strength of those knee joints in a comic for us.

It's fascinating how much of the community wants to be Mulder when it comes to Kaspersky's claims of innocence. WE WANT TO BELIEVE. And yet, the Government has not given out "proof" that Kaspersky is, in fact, what they claim it is. But they've signaled in literally every way possible what they have in terms of evidence, without showing the evidence itself. This morning Kaspersky retweeted a press release from the BSI which, when translated, does not so much exonerate him as simply ask the USG for a briefing, which I'm sure they will get.

Likewise, where there is one intelligence operation, there are no doubt more. Kaspersky also runs Threatpost and a popular security conference. Were those leveraged by Russian intelligence as well? What other shoes are left to drop?

Reports like this rewrite our community's history: Are all AV companies corrupted by their host governments? Is this why Apple refused to allow AV software on the iPhone, because they saw the risk ahead of time and wanted to sell to a global market?

If I were Russian intelligence leveraging KAV, I would make it known that if you put a bitcoin wallet on your desktop, and then also bring tools and documents from TAO home to "work from home," and you happen to have KAV installed, your bitcoin wallet would get donations. No communication needed, no risky contacts with shady Russian consulate officials. Nothing convictable as espionage in a court of law. Maybe I would mention this at the bar at Kaspersky SAS in Cancun.

But the questions cut both ways: is the USG going to say it would never ask an American AV company to do this? The international norms process is a trainwreck, and the one thing they hang their hats on is "We've agreed not to attack critical infrastructure," but defining what the trusted computing base of the Internet as a whole actually is, they left as a problem for the "techies."

We see now the limitations of this approach to cyber diplomacy, and the price.

Saturday, September 16, 2017

The Warrant Cases are Pyrrhic Victories

The essential question in Trusted Computing has always been "Trusted FROM WHOM?" and the answer right now is from the Government.

Trusted Computing is Complex

So a while back I had two friends I hung out with all the time, and because we knew almost no women, after a full day of work at the Fort we would go back to their house and try to code an MP3 decoder, or work on smart card security (free porn!), or any number of random things.

One of my friends, Brandon Baker, went off to Microsoft, ended up building the Hyper-V kernel, and worked on this little thing called Palladium, which got renamed the Next-Generation Secure Computing Base and then, because of various political pressures relating to creating an entirely new security structure based on hardware PKI, was buried.

But it didn't die. It has been slowly gaining strength and being reincarnated in various forms, and one of those forms is Azure Confidential Computing.

People have a hard time grasping Palladium because without all the pieces it is always broken and makes no sense, and most of those pieces are in poorly documented hardware. But the basic idea is this: what if Microsoft Windows could run a GPG program that the OS could not introspect or change in any way, such that your GPG secret key was truly secret, even from the OS, even if a kernel rootkit was installed?
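As a conceptual sketch only (Python cannot actually enforce this isolation; real Palladium-style guarantees require hardware), the interface the OS would see is a sealed service that exposes operations on the key but never the key itself. The SealedSigner class below is an invented stand-in, not any real Palladium or Azure API:

```python
# Conceptual model of the enclave idea: the host gets sign()/verify(),
# never the key. In real systems the boundary is hardware-enforced;
# here it is only a class interface, which Python cannot truly protect.

import hmac
import hashlib
import os

class SealedSigner:
    """Stand-in for an enclave: callers get operations, not the key."""

    def __init__(self):
        self.__key = os.urandom(32)  # generated "inside" the enclave

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

enclave = SealedSigner()
tag = enclave.sign(b"launch codes")
print(enclave.verify(b"launch codes", tag))   # -> True
print(enclave.verify(b"launch codez", tag))   # -> False
```

The design point is that even a kernel rootkit attacking the host would, in the hardware version of this picture, see only the sign/verify boundary, never the key material behind it.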

Of course, the initial concept for Palladium was mostly oriented towards DRM, in the sense of having a media player that could remotely authenticate itself to a website and a secured keyboard/screen/speaker such that you couldn't steal the media. This generated little interest in the marketplace and the costs for implementation were enormous, hence the failure to launch.

"Winning" on warrants. The very definition of Pyrrhic Victories.

Law Subsumed by Strategy

There's a sect among the law enforcement, national security, and legal communities that looks upon Microsoft's and Google's court cases on extraterritorial warrant responses as an impingement on the natural rights of the US Nation State.

It's no surprise that the legal arguments are disjointed on both sides. Effectively, the US position is that the government should be able to collect whatever data it wants from Google or Microsoft, because the data is accessible from the US, and because they want it. Google and Microsoft have stored that data on overseas servers for many reasons, but also because their customers, both international and domestic, think the US State no longer has that natural right, that it is as primitive as Prima Nocte. And in addition, their employees think the US has failed to go to bat on these issues for Google/Microsoft/etc. in China and the EU. This isn't necessarily true, but it is true that the USG has treated the populations that make up the technology elites as if their opinions are not relevant to the discussion.

Law is not a Trump Card

The problem with making the US Government the primary foe in every technology company's threat model is that they can very quickly adapt to new laws by building systems that even they themselves cannot introspect, which is exactly what Azure Confidential Computing is. But that's just the beginning. Half their teams come from the NSA and CIA technology arms. They know how to cause huge amounts of pain to our system while staying within regulations and laws, and they have buy-in from the very tops of their organizations.

This was all preventable. If we'd had decent people on the executive team killing the Apple lawsuit last year and finding some way to come to an agreement and end the crypto war, we could have prevented Going Dark from becoming a primary goal of all of the biggest companies (i.e., even at financials). We needed to be able to negotiate with them in good faith to maintain a balance between "the Golden Age of Metadata" and what they and their customers wanted.

We didn't have anyone who could do that. As in so many pieces of the cyber-government space, we may have missed our window to prevent the next thread of the international order from unraveling.

Thursday, September 7, 2017

Opaque cyber deterrence efforts


Pakistan's Nuclear Policy: A Minimum Credible Deterrence

By Zafar Khan
Figuring out what cyber operations can and can't deter is most similar to figuring out what percentage of your advertising budget you are wasting. That is: you know 90% of your cyber deterrence isn't working, you just don't know which 90%.

That said, much more of cyber deterrence is based around private companies than we are used to in international relations. Kaspersky may or may not have been used for ongoing Russian operations, and the deterrent effect of banning them from the US market will have a long reach. This mix is complicated and multi-faceted: some of the hackers that ran China's APT1 effort now work for US anti-virus companies.

Modern thinkers around deterrence policy often look only at declared, overt deterrence, of the type North Korea is currently using. But covert deterrence is equally powerful and useful, and much more applicable to offensive cyber operations, where there is no like-for-like comparison between targets or operational capability.

But cyber does have deterrent effects - knowing that someone can out your covert operatives by analyzing the OPM and Delta Airline databases can deter a nation-state from operating in certain ways.

The question is whether non-nation-state actors also have opaque cyber deterrence abilities and how to model these effects as part of a larger national security strategy - for example, via Google's Project Zero. And it's possible that the majority of cyber deterrence will at least pretend to be non-nation-state efforts, such as ShadowBrokers.

Technically, deterrence often means the ability to rapidly respond to and neutralize offensive cyber tools. Modern technology, such as endpoint monitoring or country-wide network filtering, can provide an effective deterrent when fed input from SIGINT or HUMINT sources, effectively neutralizing potential offensive efforts by our adversaries.