The problem with opening a back door.

Further disclosures from the Edward Snowden documents have indicated that the NSA has been running a program to weaken public standards, including those coming out of NIST, as well as endeavouring to get private companies to insert back doors into their hardware or software to make them easier for the NSA, and its friends, to exploit.

There are a few problems with this, other than the obvious one.

Firstly, it is all well and good to put in a back door and think no-one else will find it. We can have a brief look at the history of cryptography with respect to World War II to see how far the “I’m smarter than the other guy” philosophy will get you. Take the German Enigma machine for example, or perhaps the Japanese Purple code, or even the US diplomatic “Black” code, which while not broken by cryptanalytic means, did get lifted by the Italian intelligence services in 1941, resulting in the US representative with the British desert forces acting as a handy source of news to the German forces for a large part of the campaign. It seems clear that, since mathematical ability and careful observation combined with trial and error are not the cultural monopoly of any country on the planet, other organisations will definitely be in this game as well. Worse, because we’re dealing with communications rather than physical security, it is clear that it would be desirable to write tools to take advantage of the back doors. As Bruce Schneier is fond of pointing out, you just need a copy of the tool, and once you have that, even a 12-year-old will be well on their way, no special abilities required.

So let us imagine other organisations, either foreign or criminal, work out what’s going on, or are sold the required tools. What happens then? This is the second problem. Normally, in issues of national security, an intelligence agency is only trying to protect things that it regards as mattering. In this respect, when dealing with state secrets they’ll be taking precautions, and they will also normally be keeping an eye on what is going on in case something leaks out (i.e. they will assume that the precautions may not work). The problem here is that the back-doored software and hardware is being passed on to other organisations, both American and foreign, who do not know they’re vulnerable and, even if they did, would have little chance of dealing with it. They will be more likely to be concentrating their resources on staying in business rather than trying to launch covert operations against their competition (which in some cases may be not only foreign but state-owned). The person getting compromised as a result of the program does not belong to the same organisation as the person responsible for the compromise being possible. Consequently, if I’m a manufacturer of the X widget hoping to corner the market for a few years by innovation, it is unlikely the NSA will either care, or notice, when someone else starts making an X widget using my stolen intellectual property. It’s not the NSA’s problem, it’s mine. As a case in point, the compromise of the American diplomatic code was discovered by the British (who cared), not by the US (who weren’t committed to the war then). It was the British looking for a leak, not the US, and the British were just lucky they had a line of communication to use. As the maker of the X widget, even if I work out that someone is using a back door, I don’t have anyone to talk to, and the people responsible will simply deny it.

Which raises another interesting point: if I were a foreign, or criminal, agency with designs on US economic activity, the “compromise programs” would be the ones I’d be trying to infiltrate. I’d send my best people, and the only instruction I would have to give them is “Do your job really well”. So here’s problem number three: let’s imagine I’m running a program to compromise security standards and hardware. How would I tell the difference between someone who is working for me suggesting a back door, and someone who is working for “someone else” suggesting a back door? I think I would have a lot of trouble with that one, not least because I probably have no way of finding out that the suggestion came from someone working for “someone else”, as I am not the target, and it won’t be me who discovers there’s a problem when the back door is taken advantage of. After all, the person working for “someone else” will probably be a model employee, and I will be using the back door myself. It’s not like the situation with the leaking of Atomic Bomb information at the end of World War II; there will not be a big explosion, a loud noise, and a blinding flash of light to tell me there is something wrong. The first indication I, as the government agency, will have that I’m in trouble is that after foreign organisations and criminals have spent years taking advantage of my country’s ability to innovate, as well as pre-empting any trade negotiations and the like through data mining using back doors, government tax receipts will eventually collapse to the point where my budget gets cut and my society collapses. Not a glamorous ending, and certainly not the kind of outcome for my country and its population that I, as a government agency, should be promoting.

The history books tell us that when DES was first put forward, the NSA showed it was a real asset to the US (and a few others) by improving the standard in order to protect it from cryptanalytic techniques that at the time were unknown in civilian circles (the tuning of the S-boxes to resist what we now know as differential cryptanalysis). Times do appear to have changed, and sadly the word “asset” appears to have dropped its last two letters. I hope we see a change in this soon.

It’s not really about encryption

There has been a lot of “excitement” lately about government surveillance, encryption, and secure message services. This has led to one provider, Lavabit, shutting up shop rather than agreeing to do something they wanted no part in, and another provider, Silent Circle, closing down their Silent Mail service in order to avoid being asked to do something they clearly wanted no part of either.

While I think both Lavabit and Silent Circle are to be commended for their stands, and I’m hoping the fact things are getting to this extreme is giving some others cause for reflection, as usual the popular discussion appears to be turning into one about how the individual use of encryption can help prevent abuse in a “surveillance state”. This is a bit of a problem, as use of encryption, certainly public key encryption, provides you with no long-term protection at all; not only do some people not seem to understand this, but as usual it’s spawning a new generation of snake oil, with claims about products based on public key encryption that are simply not true.

As anyone who knows me will hopefully testify, and my work on Bouncy Castle should show, I do believe quite passionately in the “civilian population” (in which I include myself) having access to good quality encryption software, but I also recognise the limits of its usefulness. For me encryption is primarily a tool to support trade and commerce: certainly a good defence against corruption and criminal activity, but not really as helpful as it might seem in keeping “the Feds” (or any other similarly well resourced and determined party) out of my business. The only thing that defends my individual freedoms and stops a government agency from kicking down my door at 2.00 AM and shooting me is the rule of law.

It is important to understand this. The letters PGP in the software originally created by Philip Zimmermann stand for “Pretty Good Privacy”. I think it’s important to appreciate that “Pretty Good” is quite different from “Indefinite”. In terms of claims made, Phil certainly delivered: it’s pretty good, and if you keep upping your key sizes appropriately and manage your keys carefully, you’ll probably still get at least 2 to 5 years before anyone can actually start reading your messages without stealing your keys, or the keys of one of your recipients. There is a simple reason for this: as we have advances in mathematics and computing, the ability to recover the secret values associated with algorithms like RSA improves, so all public key algorithms really give you is a window of privacy, not the ability to forever hide what you think. So yes, good for medium-term planning, great for reducing fraud related to the packet sniffing of credit card details, but next to useless for protecting you from “the state” (and if you want to fully understand what that means, there is already at least one example showing that a few years is nothing to a government agency with the right incentives).
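
To make “upping your key sizes” a little more concrete, here is a minimal Java sketch using the standard JCA API with the Bouncy Castle provider. The class name and the choice of 4096 bits are illustrative assumptions on my part, not a recommendation; the point is simply that the key size is a number you revisit, not a constant.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Security;

    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class KeySizeSketch
    {
        public static void main(String[] args)
            throws Exception
        {
            // make the Bouncy Castle provider available to the JCA
            Security.addProvider(new BouncyCastleProvider());

            // ask for an RSA key pair generator from the BC provider
            KeyPairGenerator kpGen = KeyPairGenerator.getInstance("RSA", "BC");

            // 4096 bits is assumed here as a conservative choice at the
            // time of writing - as mathematics and computing advance,
            // yesterday's key size stops buying the same window of privacy
            kpGen.initialize(4096);

            KeyPair kp = kpGen.generateKeyPair();

            System.out.println("generated a "
                + kp.getPublic().getAlgorithm() + " key pair");
        }
    }

Regenerating keys at larger sizes like this, and retiring the old ones, is really just a way of holding the window of privacy open a little longer; it does not, and cannot, close it.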

So where does that leave us today? Well, if you think your country is on the slippery slope to becoming a surveillance state (and I guess, with all due respect to other people’s good intentions, it’s pretty clear a few countries are…) start lobbying to get the laws changed, improve accountability, make sure you have a diverse and free press employing good investigative journalists, keep an eye out for your neighbours, and while you are carrying out the debate try to keep in mind that a lot of people on the “other side” are genuinely trying to do the right thing (so may be a little perplexed and confused that the rest of us are so cross…). Don’t kid yourself that because you’re using cryptography you’re in some way making yourself safer from, or apart from, what might happen to everyone around you who is not.

The debate going around us is really about the kind of society we all want to live in, government agencies included. After all, if your country makes the final transition from surveillance state to police state, the only thing the heavy use of encryption is likely to do for you is make you a target. Most likely not the position you intended to be in, and nothing about your private keys will protect you then.