Monday, March 19, 2018

Some CrazyPants ideas for handling Kaspersky

These pants make more sense than some of the ideas posted for handling Kaspersky

So the benefit of being a nation-state, and the hegemon of course, is that you can pretty much do whatever you want. I refer, of course, to last week's LawFare post on policy options for Kaspersky Labs. The point of the piece, written by a respected and experienced policy person, Andrew Grotto, is that the US has many policy options when dealing with the risk Kaspersky and similar companies pose to US National Security. Complications include private ownership of critical infrastructure, the nature of cyberspace, and of course ongoing confusion as to whether we have punitive or palliative aims in the first place. Another complication is how crazypants all the suggestions are.

He lists six options, the first two dealing with "Critical Infrastructure," where the Government has direct regulatory levers and where Kaspersky already has zero market share and always will. The third one is so insane, so utterly bonkers, that I laughed out loud when reading it. It is this:

Ok, so keep in mind that "deemed export" is an area of considerable debate in the US Export Control community, and not something any other country does. While yes, applying the BIS Export Control rule in this case would immediately cause every company that does business in the United States to rush to uninstall KAV, this is not where the story would end.

Instead, we would have a deep philosophical discussion (i.e. Commerce Dept people being hauled in front of Congress), because for sure not everyone who works at Azure, at every backup provider in the world, or at literally any software company is a US Citizen. And while Kaspersky has deep and broad covert access to the machines its software is installed on, it is hardly the only vendor that does.

We currently interpret these rules extremely laxly, for good reason.

The next suggestion in the piece is adding Kaspersky to the Entity List - essentially blacklisting them without giving a reason. Even ZTE did not get this treatment, and while they paid a fine and are working their way back into good graces if possible, the case against them was highly defensible. I mean, what about the thousands of US businesses that already have Kaspersky installed? The follow-on effects would be massive, and the piece ends up recommending against it, since the case against Kaspersky, while logical, is possibly not universally persuasive enough to justify a death sentence without further evidence.

Tool number 5 is the FTC bringing legal claims against Kaspersky for "unfair or deceptive acts or practices" - in particular, for pulling back to the cloud files that are innocuous. Kaspersky's easy defense is going to be "We don't know they are innocuous until we pull them back and analyze them, we make it clear this is what we do, and we are hardly the only company to do so - for example, see this article." I.e., the idea of FTC legal claims is not a good one, and they know it.

The last "Policy Tool" is Treasury Sanctions. Of course we can do this but I assume we would have to blow some pretty specific intel sources and methods to do so.

Ok, so none of the ideas for policy toolkit options are workable, obviously. And as Andrew is hardly new at this, I personally would suggest that this piece came out as a message of some kind. I'm not sure WHAT the message is, or who it is for, but I end with this image to suggest that just because you CAN do something doesn't mean it is a good idea.

What happens if the Russians get false flag right?

There's a lot of interesting and unsolved policy work to be done on the Russian hack of the 2018 Olympics. Some things that stuck out to me were the use of router techniques, their choice of targeting, and of course, the attempt to false flag the operation to the North Koreans. I mean, it's always possible the North Koreans, not shabby at this themselves, rode in behind the Russians or sat next to Russian implants and did their own operation.

There are a lot of ways for this sort of thing to go wrong. What if there had been a simple bug in the router implants that bricked them? Or what if the Russians had gotten their technical false flag efforts perfect, and we did a positive attribution to North Korea, or could not properly attribute it at all but still assumed it was North Korea?

Or what if instead of choosing North Korea, they had chosen Japan, China, or the US or her allies?

What if a more subtle false flag attempt smeared not just a country, but a particular individual, who was then charged criminally, which is the precedent we appear to want to set?

I don't think anyone in the policy community is confident that we have a way to handle any of these kinds of issues. We would rely, I assume, on our standard diplomatic process, which would be slow, unused to the particulars of the cyber domain, and fraught with risks.

It's not that this issue has not been examined - as Allen points out, Herb Lin has talked about it. But we don't have even the glimmers of a policy solution. We have so much policy focus on vulnerability disclosure (driven by what Silicon Valley thinks), but I have seen nothing yet on "At what point will we admit to an operation publicly and contribute to cleanup?" or "How do we prove to the public that an operation is not us or one of our allies?" In particular, I think it is important to note that these issues are not necessarily Government-to-Government issues.


  • Herb Lin: LINK
  • Technical Watermarking of Implants Proposal: LINK

Tuesday, March 13, 2018

The UK Response to the Nerve Agent Attack

Not only do I think the UK should respond with a cyber attack, I think they will do so in short order.

It's easy to underestimate the Brits because they're constantly drinking tea and complaining about the lorries, but the same team that will change an Al Qaeda magazine to cupcake recipes will turn your power off to make a point.
The Russians have changed their tune entirely today, now asking for a "joint investigation" and not crowing about how the target was an MI6 spy and traitor to the motherland killed as a warning to other traitors (except on Russian TV). I don't think the Brits will buy it. As Matt Tait says in his Lawfare piece, this is the Brits talking at maximum volume, using terminology that gives them ample legal cover for an in-kind military response. Ashley Deeks further points out the subtleties of the international law terminology May chose to use and how it affects potential responses.

For something like this, sanctions go without saying, but I don't think that exhausts the toolbox. The US often also does indictments, but those are sometimes more about sending a message than having an impact. The UK could pressure Russia on the ground in many places (by supporting Ukraine, perhaps?), but that takes a long time and is somewhat risky. Cyber is a much more attractive option for many reasons, which I will put below in an annoying bullet list.

  • Cyber is direct
  • Cyber can be made overt with a tweet or a sharply worded message
  • GCHQ (and her allies) are no doubt extremely well positioned within Russian infrastructure (as was pointed out in this documentary), so operational lag could be minimized or negligible
  • Cyber can be made to be discriminatory and proportional
  • Cyber can be reversible or not as desired
  • Sending this message through cyber provides a future deterrent and capabilities announcement
That answers why the Brits SHOULD use cyber for this. But we also think they WILL, because they've sent that signal via the BBC and the Russians heard it loud and clear.

Tuesday, March 6, 2018

Why Hospitals are Valid Targets for Cyber

Tallinn 2.0 screenshot that demonstrates which subject lines are valid in spam and which are not. This page has my vote for "Most hilarious page in Tallinn 2.0". CYBER BOOBY TRAPS! It's this kind of thing that makes "Law by analogy" useless, in my opinion.

So often, because CNE and CNA are really only a few keystrokes apart ("rm -rf /", for example), people want to say "hospitals" are not valid targets for CNE, or "power plants" are not valid targets for CNE, or any number of other things they've labeled as critical for various purposes.

But the reason you hack a hospital is not to booby trap an MRI machine, but because massive databases of ground truth are extremely valuable. If I have the list of everyone born in Tehran's hospitals for the last fifty years, and they try to run an intelligence officer with a fake name and legend through Immigration, it's going to stand out like a sore thumb.
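To make that concrete, here is a minimal sketch in Python - with entirely hypothetical names, records, and a made-up legend_is_plausible helper - of the kind of cross-check a border system backed by exfiltrated birth records could run against a claimed identity:

```python
# Hypothetical sketch: flag claimed identities that have no matching birth record.
# Field names and data are invented for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class BirthRecord:
    name: str
    birth_year: int
    hospital: str

# Imagine this set was built from decades of exfiltrated hospital records.
birth_records = {
    BirthRecord("Ali Hosseini", 1975, "Tehran General"),
    BirthRecord("Sara Ahmadi", 1982, "Pars Hospital"),
}

def legend_is_plausible(claimed_name: str, claimed_birth_year: int) -> bool:
    """A legend that claims a local birth but has no record stands out."""
    return any(
        r.name == claimed_name and r.birth_year == claimed_birth_year
        for r in birth_records
    )

print(legend_is_plausible("Sara Ahmadi", 1982))   # True: record exists
print(legend_is_plausible("Reza Karimi", 1979))   # False: no record -> sore thumb
```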

The same thing is true with hacking United. Not only are the records in and out of Dulles airport extremely valuable for finding people who have worked with the local federal contractors, but doing large-scale analysis of traffic volumes lets you guesstimate budget levels and even figure out covert program subjects. People look at OPM and see only a first-order approximation of the value of that kind of large database. Who cares about the clearance info if you can derive greater things from it?
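In the same hedged spirit, here is a toy sketch of the "large-scale analysis of traffic volumes" idea - the airport codes, months, and counts below are invented for illustration, not real data:

```python
# Hypothetical sketch: estimate relative program tempo from travel volumes.
# The records and field names are invented; real analysis would use far richer data.
from collections import Counter

bookings = [
    {"origin": "IAD", "dest": "Doha", "month": "2017-06"},
    {"origin": "IAD", "dest": "Doha", "month": "2017-07"},
    {"origin": "IAD", "dest": "Doha", "month": "2017-08"},
    {"origin": "IAD", "dest": "Oslo", "month": "2017-07"},
]

# Count trips per destination; a sustained spike out of a hub like Dulles (IAD)
# is a crude proxy for budget levels and program tempo.
volume = Counter(b["dest"] for b in bookings if b["origin"] == "IAD")
for dest, trips in volume.most_common():
    print(f"{dest}: {trips} trips")
```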

The Bumble and Tinder databases would be just as useful. If you are chatting with a girl overseas, and she says she doesn't have a Bumble/Tinder account, and you're in the national security field, you're straight up talking to an intelligence officer. And it's hard to fake a profile with a normal size list of matches and conversations... 

And, of course, hacking critical infrastructure and associated Things of the Internet allows for MASINT, even on completely civilian infrastructure. People always underestimate MASINT for some reason. It's not sexy to count things over long periods of time, I guess.

Also, it's a sort of hacker truism that eventually all networks are connected so sometimes you hack things that seem nonsensical to probe for ways into networks that are otherwise heavily monitored.

I highly recommend this book. Sociology is turning into a real science right before our eyes, and so is intelligence.
SIGINT was the original big data. But deep down all intelligence is about making accurate predictions. Getting these large databases allows for predictions at a level that surprises even seasoned intelligence people. Hopefully this blog explains why so many cyber "norms" on targeting run into the sand when they meet reality.

Wednesday, February 28, 2018

A non-debate on the EU VEP process

VEPfest EU! Watch the whole show here

I know not many people watched the VEPFest EU show yesterday, but I wanted to summarize it. First, I want to comment on the oddity that Mozilla is for some reason leading the charge on this issue for Microsoft and Google and the other big tech companies. Of course, this was not a "debate" or even a real discussion. It was a love-in for the idea of a platonic ideal of the Vulnerability Equities Process, viewed without the actual subtleties or complexities other than in passing mention.

To that end, it did not have opposing views of any kind. This is a pretty common kind of panel setup for these sorts of organizations on these issues and it's not surprising. Obviously Mozilla would prefer a VEP enshrined in EU law, since they have had no success making this happen in the US. Likewise, Mozilla really hates the part of the VEP that says "Yes we obey contract law when buying capabilities from outside vendors".

It's impossible to predict the direction of Europe, since this issue is a pet project of one of their politicians, but an EU-wide VEP runs into serious conflict with reality (i.e. not all EU nations have integrated their defense/intelligence capabilities), and a per-country VEP would err on the side of "WE NEED TO BUILD OUR OFFENSIVE PROGRAMS STAT!" Unless the 5eyes are going to donate tons of access and capability to our EU partners, they're going to be focusing hard on the "equities" issue of catching up in this space for the foreseeable future.

I was of course annoyed, as you should be, by Ari Schwartz deciding to make up random research about things he knows nothing about. At 1:45:00 into the program he claims that bug classes have been experiencing more parallel discovery than before.

To be completely clear, there has been no published research on "bug class collision", which would be an extremely rare event - something like studying supernova collisions. Typically, "bug class spectrum analysis" is useful for doing attribution from a meta-technical standpoint, which is the subject of a completely different blog post on how toolchain timelines are fingerprints, specifically because new bug classes are among the most protected and treasured research results.

There has been some work on bug collision, but at very preliminary stages due to the lack of data (and money for policy researchers). Specifically:

  • Katie's RSA paper (Modeled) - PDF  
  • Lily's RAND paper (small data set) - PDF
  • Trey/Bruce's paper (discredited/faked data set) - PDF
There's also quite a lot of internal anecdotal evidence and opinions at any of the larger research/pen-test/offensive shops. But nothing about BUG CLASSES, as Ari claims, and definitely nothing about a delta over time or any root causes for anything like that. Bug classes don't even have a standard definition anyone could agree on.

Anecdotally though, bug collisions are rare, full stop. Every expert knows you cannot secure the internet by giving your 0day to Mozilla, even if you are the USG and have a wide net. Literally, Google Project Zero had Natalie do a FULL AND COMPREHENSIVE review of Flash vulnerabilities, and it made almost no difference to adversary collections, despite huge effort, mitigation work, automated fuzzer work, etc.
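For what it's worth, here is a toy sketch of what a real "bug collision" measurement would even look like - purely illustrative, with invented bug identifiers, using a simple capture-recapture estimate rather than anything from the papers above:

```python
# Toy sketch: what a "bug collision rate" measurement could look like.
# Bug identifiers are invented; real studies need far more (and better) data.

team_a = {"CVE-A1", "CVE-A2", "CVE-A3", "CVE-A4"}   # bugs found independently by team A
team_b = {"CVE-A2", "CVE-B1", "CVE-B2"}             # bugs found independently by team B

overlap = team_a & team_b
collision_rate_a = len(overlap) / len(team_a)        # fraction of A's bugs also found by B

# Lincoln-Petersen capture-recapture estimate of the total bug population,
# assuming (unrealistically) independent, uniform discovery.
estimated_total = len(team_a) * len(team_b) / max(len(overlap), 1)

print(f"collision rate for team A: {collision_rate_a:.0%}")
print(f"rough estimate of total bugs: {estimated_total:.0f}")
```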

But let's revisit: Ari Schwartz literally sat on stage and MADE UP research results which don't exist to fit his own political view. Who does that remind you of?

Monday, February 26, 2018

What is a blockchain for and how does it fit into cyber strategy?

The best answer to what a blockchain is is here: A Letter to Jamie Dimon.

But the best answer to what it is for, is of course a chapter of Cryptonomicon, which can be read online right here.

I will paste a sliver of it below:
A: That money is not worth having if you can't spend it. That certain people have a lot of money that they badly want to spend. And that if we can give them a way to spend it, through the Crypt, that these people will be very happy, and conversely that if we screw up they will be very sad, and that whether they are happy or sad they will be eager to share these emotions with us, the shareholders and management team of Epiphyte Corp.
I think one thing you see a lot (i.e. in personal conversations with Thomas Rid, or when reading Rob Knake's latest CFR piece) is a reflexive confusion as to why every technologist they seem to talk to holds what they consider extremely libertarian views.

My opinion on this is that it's a generational difference, and that the technologists at the forefront of internet technology simply reached that future earlier than their peers who went into policy. In other words, it's a facilely obvious thing to say that the Internet was built (and continues to be built) almost entirely on porn.

But beyond that, the areas where Westphalian Governments are not in line with people's desires create massive vacuums of opportunity. To wit: The global trade in illegal drugs is typically assumed to be 1% of GLOBAL GDP. But that money is not worth having if you can't spend it.

And credit card processors, built on the idea of having a secret number that you hand out to everyone you want to do business with, are a primary way governments lock down "illicit" trade. On a trip last week to Argentina, where Uber is outlawed but ubiquitous, I found you cannot use American Express or local credit cards, but a US Mastercard will work. And if you're a local, they suggest you can use bitcoin to buy a particular kind of cash-card which will work.

The same thing is true in the States when it comes to things that are completely legal, but unfavored, for example the popular FetLife website, which recently self-censored to avoid being blackballed by Visa.

In other words, you cannot look at the valuations of Bit-Currencies and not see them as a bet against the monopoly of Westphalian states on currency and transactions that has existed since Newtonian times. What else does this let you predict?

Friday, February 23, 2018

Blockchain Export Control

Wanting to withdraw from the Wassenaar Arrangement is a totally sane policy position, and hopefully this blog post will help explain why.

Mara would be better off rewriting Wassenaar's regulatory language as a Solidity smart contract on top of Ethereum. They share (aside from the obtuseness of the language) several key features. In particular, both can be described as one-way transaction streams.
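A minimal sketch of that analogy, in Python rather than Solidity and purely for illustration: a regulation list modeled as an append-only, one-way stream, where adding a control is trivial and removing one is effectively unsupported.

```python
# Illustrative sketch: export-control rules as an append-only, one-way stream.
# Like the Wassenaar process (and like a blockchain), you can add entries
# but there is no practical operation to revise or delete them.

class RegulationLedger:
    def __init__(self):
        self._entries = []

    def add_control(self, text: str) -> None:
        """Adding a new control is easy: just append it."""
        self._entries.append(text)

    def remove_control(self, index: int) -> None:
        """Removing or revising one is, in practice, not supported."""
        raise NotImplementedError("41 nations must agree; good luck.")

    def __iter__(self):
        return iter(self._entries)

ledger = RegulationLedger()
ledger.add_control("5A1J: 'internet surveillance systems'")
ledger.add_control("4D4: 'intrusion software' tooling")

try:
    ledger.remove_control(0)          # the revision path that never really happens
except NotImplementedError as err:
    print(err)

for rule in ledger:
    print(rule)                       # rules accumulate on the ledger forever
```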

I know that supporters of the WA, which requires all 41 member nations to agree on a change before it happens, think that the current path of export control is hunky dory and well adjusted to technical realities. But even in areas that ARE NOT CYBER, you only have to sit through a couple of public ISTAC meetings before you see that while it is easy to CREATE regulations, it is nearly impossible to revise or erase them. This is why we have regulations on the books that appear to apply to technology from the 1950s - which is how people will one day look at all Ethereum programs.

For technologies that change slowly, this is less of an issue. But you cannot predict the change rates in technological development before you decide to regulate something with export controls. Nor is any form of return-on-investment function specified for your regulation, so unused and ill-planned regulatory captures just hang around on the Wassenaar blockchain forever.

As a concrete example, let's take a look at Joseph Cox's spreadsheets, wherein he FOIA'd various UK Govt license filing information.

The 5A1J ("internet surveillance system") spreadsheet, here, specifies two real exports: one of what appears to be ETI Group's EVIDENT system to the UAE, and another that appears to be BAE Detica to Singapore, both of which were approved.

Now I personally have spent maybe fifty hours this year trying to untangle the stunningly bad 5A1J language, which uses technically incorrect terminology, arrived vastly out of date (i.e. it applies to any next-gen firewall/breach-detection system), and has no clear performance characteristics. All of this for something that in the UK resulted in TWO SALES, which, if they had been blocked, would just have resulted in the host governments putting something together from off-the-shelf components??!?!

Taking a look at his 4D4 "intrusion software" spreadsheet, here, you get similar results:

  • A sale to the United States
  • A sale of a blanket license for "Basically anything penetration testing related" to Jordan, Philippines, Indonesia, Kuwait, Egypt, Qatar, Oman, Saudi Arabia, Singapore and Dubai.
  • A sale to Bahrain
  • A sale to Dubai (but just for equipment "related"?)

Even if those are the four most important export control licenses ever issued, I think the time anyone has spent on implementing or talking about these regulations is EXACTLY LIKE the entire rainforest fed into the blazing fire every day that is Ethereum's attempt to emulate the world's slowest Raspberry Pi running Java.

There's a weird conception among "civil society" experts that export control is useful whenever any technology can have negative uses. That's a misunderstanding of how Dual-Use works that is not shared even among the most optimistic of the specialists I've talked to in this area.

In addition, NOT issuing those licenses results in four possibilities, none of which is "Country does not get said capabilities":

  1. The country develops it internally by gluing off the shelf components together (because there is basically no barrier to entry in these markets - keep in mind HackingTeam was not...a big team)
  2. The country buys it from China 
  3. The country buys it from a Wassenaar country with a different and looser implementation of the regulation. (Unlike Ethereum, every WA implementation is different, which is super fun. For example, the US has this neat concept called "Deemed export" which means you need a license if you give the H1B employee next to you something that is controlled.)
  4. The country buys it from a reseller in a country with less baggage using a cover company and then emails it to themselves using the very complicated export control avoidance tool "Outlook Express".

But for FOUR LICENSES, seriously, who cares? This whole thing is like having a BBQ on the side of the space shuttle. With enough expended energy you can sure toast a few marshmallows, but it's not going to be the valuable memory-building Boy Scout experience for your kids that maybe you were hoping for.

And I'll tell you why I personally care: it's because all the people who should be working on policies that "make sure we don't lose an AI war to China" are instead sitting in Commerce Dept rooms defending their companies from the deadly serious rear naked choke that is Wassenaar! And it's not just cyber, it's everything.

If you want the number for your controlled Frommy Widget in the WA to go from 4MHz to 6MHz, it's a simple three-year process of arguing about it with various agencies; then it goes through the system, and by the time the language has changed it's already out of date, much like every valuation of your Bitcoin you've ever gotten. So now you're spending your precious cycles arguing for a change from 6MHz to 8MHz, in the very definition of a Sisyphean process.

The end result is that instead of exporting hardware around the world, we export jobs as companies set up overseas in the VERY INDUSTRIES WE CONSIDER MOST SENSITIVE AND IMPORTANT. This is a hugely real issue that should be part of the ROI discussion around any of these regulations but never is for some reason.

This could maybe be fixed by implementing a mandatory, nonrenewable five-year sunset on all Wassenaar regulations. But to do this, the US (and the international community) would basically need to hard-fork the whole idea of technological export control, which is something we should do for many reasons. A more realistic option may be to pull completely out of the WA and re-implement the parts that make sense with bilateral agreements.
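To illustrate the sunset idea (and only to illustrate it - the adoption dates and helper below are invented), the earlier one-way-stream sketch changes shape completely once every control carries a hard five-year expiry:

```python
# Toy sketch of the proposed fix: every control carries a mandatory,
# nonrenewable five-year sunset and simply drops off the list afterward.
# Adoption dates are illustrative only.
from datetime import date, timedelta

SUNSET = timedelta(days=5 * 365)

controls = [
    {"rule": "5A1J internet surveillance systems", "adopted": date(2013, 12, 4)},
    {"rule": "4D4 intrusion software",             "adopted": date(2013, 12, 4)},
    {"rule": "Frommy Widget over 4MHz",            "adopted": date(2016, 6, 1)},
]

def active_controls(today: date):
    """Keep only rules that have not yet hit their five-year sunset."""
    return [c["rule"] for c in controls if today - c["adopted"] < SUNSET]

print(active_controls(date(2018, 3, 1)))   # everything still in force
print(active_controls(date(2019, 6, 1)))   # the 2013 rules have sunset automatically
```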

Another issue is that the technical-understanding cycles spent on implementing new regulations are fewer than they should be for a process that is only a one-way diode. I.e., you need people full time on every one of the new and old issues, but by definition the technical experts on these issues work on them part time. Basically, you want people doing a TDY looking at all the regulations from a technical perspective, and we don't have that as a community. We could solve that by giving grants to various companies to fund it, or by hiring for it within the Commerce Department (and the various related international equivalents). Think the DARPA PM program, but for export control experts.

But that's hugely expensive, and as pointed out, it's questionable if any of this makes any more sense to invest in than a virtual blockchain cat!