The problem with opening a back door.

Further disclosures from the Edward Snowden documents indicate that the NSA has been running a program to weaken public standards, including those coming out of NIST, as well as endeavouring to get private companies to insert back doors into their hardware or software, making it easier for the NSA, and its friends, to exploit them.

There are a few problems with this, other than the obvious one.

Firstly, it is all well and good to put in a back door and think no-one else will find it. A brief look at the history of cryptography in World War II shows how far the “I’m smarter than the other guy” philosophy will get you. Take the German Enigma machine, or the Japanese Purple cipher, or even the US diplomatic “Black” code, which, while not broken by cryptanalytic means, was lifted by the Italian intelligence services in 1941, with the result that the US representative with the British desert forces acted as a handy source of news for the German forces for a large part of the campaign. Since mathematical ability and careful observation combined with trial and error are not the cultural monopoly of any country on the planet, other organisations will certainly be in this game. Worse, because we are dealing with communications rather than physical security, someone will inevitably write tools that take advantage of the back doors. As Bruce Schneier is fond of pointing out, you then just need a copy of the tool, and once you have that, even a 12-year-old will be well on their way, no special abilities required.

So let us imagine that other organisations, either foreign or criminal, work out what is going on, or are sold the required tools. What happens then? This is the second problem. Normally, in matters of national security, an intelligence agency is only trying to protect things it regards as mattering. When dealing with state secrets it will take precautions, and it will also keep an eye on what is going on in case something leaks out (i.e. it will assume the precautions may not work). The problem here is that the back-doored software and hardware is being passed on to other organisations, both American and foreign, who do not know they are vulnerable and, even if they did, would have little chance of dealing with it. They are more likely to be spending their resources on staying in business than on launching covert operations against their competition (which in some cases may be not only foreign but state owned). The person compromised as a result of the program does not belong to the same organisation as the person responsible for making the compromise possible. Consequently, if I am a manufacturer of the X widget hoping to corner the market for a few years by innovation, it is unlikely the NSA will either care, or notice, when someone else starts making an X widget using my stolen intellectual property. It is not the NSA’s problem; it is mine. As a case in point, the compromise of the American diplomatic code was discovered by the British (who cared), not by the US (who were not yet committed to the war). It was the British looking for a leak, not the US, and the British were just lucky they had a line of communication to use. As the maker of the X widget, even if I work out that someone is using a back door, I have no one to talk to, and even if I work it out for myself, the people responsible will deny it.

Which raises another interesting point: if I were a foreign, or criminal, agency with designs on US economic activity, these “compromise programs” are the ones I would be trying to infiltrate. I would send my best people, and the only instruction I would need to give them is “Do your job really well”. So here is problem number three: if I am running a program to compromise security standards and hardware, how do I tell the difference between someone working for me suggesting a back door, and someone working for “someone else” suggesting a back door? I think I would have a lot of trouble with that, not least because I probably have no way of finding out that it was the “someone else” who made the suggestion, as I am not the target, and it will not be me who discovers there is a problem when the back door is taken advantage of. After all, the “someone else” will probably be a model employee, and I will be using the back door myself. Unlike the leaking of atomic bomb information at the end of World War II, there will be no big explosion, loud noise, or blinding flash of light to tell me something is wrong. The first indication I, as the government agency, will have that I am in trouble is that, after foreign organisations and criminals have spent years exploiting my country’s ability to innovate, as well as pre-empting trade negotiations and the like by data mining through the back doors, government tax receipts will eventually collapse to the point where my budget gets cut and my society collapses. Not a glamorous ending, and certainly not the protection of my country and its population that I, as a government agency, am supposed to be promoting.

The history books tell us that when DES was first put forward, the NSA showed it was a real asset to the US (and a few others) by strengthening the standard against cryptanalytic techniques that were, at the time, unknown in civilian circles. Times appear to have changed, and sadly the word “asset” seems to have lost its last two letters. I hope we see a change in this soon.
