Backdoors in encryption products

After the recent terrorist attacks, governments are again pushing for more surveillance, and the old debate about the necessity of backdoors in encryption software rears its ugly head again. Leaving the surveillance question aside, let's look at what it means to introduce backdoors into programs and how they can be harmful, especially when we are talking about security and encryption.

Generally, a backdoor is an additional interface to a program that is not documented, whose existence is kept secret, and which is used for purposes other than the main function of the program. Quite often, a backdoor is simply a testing interface that the developers use to run special commands and perform tasks that normal users would not need. Such testing backdoors are also often left in the production code, sometimes completely unprotected, sometimes protected with a fixed password stored in the code of the program where it is easy to find, i.e. also unprotected. Testing backdoors may or may not be useful to an attacker, depending on the kind of functionality they provide.
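For illustration only, here is a minimal, entirely hypothetical sketch of such a testing backdoor: an undocumented command "protected" by nothing more than a password embedded in the source, which anyone reading the code or the binary can extract. The names below (DEBUG_PASSWORD, handle_command) are made up for this example.

    # Hypothetical testing backdoor: the "protection" is a fixed string in the code.
    DEBUG_PASSWORD = "n0t-s0-s3cret"   # ships with every copy of the program

    def handle_command(command, password=None):
        if command == "__debug_dump":          # undocumented interface
            if password == DEBUG_PASSWORD:     # trivial to find by reading the code
                return "internal state, keys, sessions..."
            return "access denied"
        return "running normal command: " + command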

Sometimes backdoors are introduced with the explicit goal of gaining access to the program surreptitiously. These are often very powerful tools that allow full access to all functionality of the program and sometimes add functions that are not even available through the regular user interface. In security and encryption products, such backdoors could allow unauthorized access, impersonation of other users, man-in-the-middle attacks, and collection of keys, passwords and other useful information, among other things.

The idea of the proponents of backdoors in security and encryption software is that we could introduce such backdoors into the encryption and other tools used by the general public. Then access to those backdoors would only be available to the police, the justice department, the secret services, immigration control and drug enforcement agencies… did I miss any? Maybe a few more agencies would be on the list, but they are all well behaved, properly trained in computer security and completely legal users. That access would allow them to spy on people using the tools in case those people turn out to be terrorists or something. Then the backdoors would come in really handy to collect evidence against the bad guys and perhaps even prevent an explosion or two.

The problem with this reasoning is that it assumes too much. The assumptions include:

  1. The existence of and access to the backdoors will not be known to the “bad guys”. As practice shows, the general public and the criminal world contain highly skilled people who can find those backdoors and publish (or sell) them for others to use. Throughout computing history, every single backdoor was eventually found and publicized. Why would it be different this time?
  2. The “bad guys” will actually use the software containing the backdoors. That’s a big assumption, isn’t it? If those guys are clever enough to use encryption and other security software, why would they use something suspicious? They would go for tools that are well known to contain no such loopholes, wouldn’t they?
  3. Surveillance of everyone is acceptable as long as one of the people under surveillance is sometimes correctly determined to be a criminal. That premise is by itself the subject of many a fiction story and film; “Minority Report” comes to mind. The book “Tactical Crime Analysis: Research and Investigation” might be a good discussion of the problems of predicting crime even in repeat offenders; try applying that to first-time offenders and you get essentially random results. Couple that with the potential for abuse of the collected surveillance data… I do not even want to think about it.

So we would end up, among other things, with systems that can be abused by the very “bad guys” we are trying to catch, while they themselves use other, trustworthy software, and with surveillance results on the general population that are wide open to abuse as well. I hope this is sufficiently clear now.

Whenever you think of “backdoors”, your knee-jerk reaction should be “remove them”. Even for testing, they are too dangerous. If you introduce them in the software on purpose… pity the fool.

TrueCrypt

Since the anonymous team behind TrueCrypt has left the building, security-aware people have been left wondering what comes next. I personally keep using TrueCrypt, and as long as it works I will keep recommending it.

Recently, Bruce Schneier has raised a few red flags with strange advice that seems to indicate he is now being paid for his “services to the community” by parties not so interested in keeping the community secure. One such piece of advice is to switch from TrueCrypt to BitLocker.

The people who “disappeared” from behind TrueCrypt recommended switching to BitLocker, and that makes BitLocker suspect right away. Moreover, anyone working in security would be right to suspect that BitLocker, coming from Microsoft, would be backdoored. And now Bruce Schneier comes out and says that he recommends BitLocker instead of TrueCrypt? Great. I am not going to trust either.

For the moment, TrueCrypt remains the only trustworthy application for disk encryption. There is an effort to keep TrueCrypt alive and to support newer features of the file systems. I hope it works and that we still have a tool to trust five years from now.

I have also stored the recent versions of TrueCrypt.

Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone)

An excellent article by Sven Tuerpe argues that we pay excessive attention to the problems of encryption and insufficient attention to the problems of system security. I wholeheartedly agree with that statement. Read the original article: Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone).

Security cannot be based on encryption and encryption only. To be secure, a system must be built to withstand attacks from outside and from within. There is a lot of expertise in building secure devices and creating secure software, but none of it is used in today’s mobile devices. Whether those smartphones and tablets provide encryption or not is simply beside the point in most attack scenarios and for most kinds of usage. We have to get the devices secured in the first place before the discussion of encryption on them begins to make sense.

TrueCrypt disappears

Quite abruptly, the TrueCrypt disk encryption tool is no more. The announcement says that the tool is no longer secure and should not be used. The website provides a heavily modified version of TrueCrypt (7.2) that allows one to decrypt the data and export it from a TrueCrypt volume.

Many questions are being asked about what actually happened and why, and speculation is rampant. Unfortunately, no explanation seems to be forthcoming from the developers. For the moment, it is best to assume the worst.

My advice would be not to download the latest version, 7.2. Stick with whatever version you are using now, if you are using TrueCrypt at all, and look for alternatives (although I do not know of any other cross-platform portable storage container tools). If you are on 7.1a, that version is still undergoing an independent audit, and you may be well advised to wait for the final results.

Update: there is a Swiss website trucrypt.ch that promises to keep TrueCrypt alive. At the moment, most importantly, they have the full collection of versions of TrueCrypt and all of the source code. There will probably be a fork of TrueCrypt later on.

Cryptography: just do not!

Software developers regularly attempt to create new encryption and hashing algorithms, usually to speed things up. There is only one answer one can give in this respect:

What part of "NO" don't you understand?

Here is a short summary of reasons why you should never meddle in cryptography.

  1. Cryptography is mathematics, very advanced mathematics
  2. There are only a few good cryptographers and cryptanalysts and even they get it wrong most of the time
  3. If you are not one of them, never, ever, ever try to write your own cryptographic routines
  4. Cryptography is a very delicate matter, worse than bomb defusing
  5. Consequently, you must understand that most commonplace “cryptographic” functions are not really cryptographic
  6. Even when it is good, cryptography is too easy to abuse without knowing it
  7. Bad cryptography looks the same as good cryptography. You will not know whether cryptography is broken until it is too late

So, I hope you are sufficiently convinced not to create your own cryptographic algorithms and functions. But we still have to use existing cryptographic functions, and that is no picnic either. What can mere mortals do to keep themselves on the safe side?
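If an example helps, here is a minimal sketch of staying on the safe side, assuming the widely used third-party Python package “cryptography”: delegate every decision (algorithm, mode, key size, authentication) to a vetted high-level recipe instead of writing or combining primitives yourself.

    # Use a reviewed high-level recipe rather than inventing an algorithm.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the library chooses algorithm and key size
    f = Fernet(key)

    token = f.encrypt(b"secret data")    # authenticated encryption, nothing to get wrong
    assert f.decrypt(token) == b"secret data"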

Speaking of passwords…

Wouldn’t it be quite logical to talk about passwords after user names? Most certainly. The trouble is, the subject is very, very large. Creating, storing, transmitting, verifying, updating, recovering, wiping… Did I get all of it? It is going to take a while to get through all of that. Let’s split the subject and talk about password storage now, as it is the topic that comes up most often in security discussions and in the news.

Speaking of which, some recent break-ins, in case you were not keeping track:

LinkedIn – 6.5 million passwords stolen, Yahoo – 450 thousand passwords stolen, Android Forums – 1 million, Last.fm – 8 million, Nvidia – 400 thousand, eHarmony – 1.5 million, Billabong – 21 thousand, TechRadar… the list goes on and on.

Out of the 8 million passwords in the LinkedIn and Last.fm breaches: “It took a user on the forum less than 2½ hours to crack 1.2 million of the hashed passwords, Ars Technica reported.”

Oops. Is that supposed to be so easy? Actually… no.

There are a few easy rules for storing passwords. First of all, never store passwords in the clear, unencrypted, like Billabong did. Remember that any and every system either has been or eventually will be broken into. You have to assume that your password database will fall into the wrong hands sooner or later, and it has to be prepared for that eventuality, if only to look good in the eyes of the press.

So, when your password database is in the hands of the attackers, it has to defend itself. A database full of unencrypted passwords does not provide any defense of course. What about an encrypted database?

Well, since you have to be able to use the database, you have to decrypt it when you need it, so the system will have the key to the database somewhere. Since the attacker got their hands on the database, there is no reason why they should not get the encryption keys at the same time. So this is definitely not improving the situation.

Secure hashes (as in the name of this blog) are the ultimate answer. The important thing about hashes is that they do not require the use of a key and they can only be computed easily one way: from the clear piece of information to the hash. They cannot be reversed; one cannot easily compute the original piece of information from the hash. That is why they are called one-way hashes.
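A tiny illustration with Python’s standard hashlib (SHA-256 picked only because it is familiar, not as a password-hashing recommendation): computing the digest is easy, but the only way “back” is to guess a candidate and recompute.

    import hashlib

    stored = hashlib.sha256(b"correct horse battery staple").hexdigest()

    def verify(candidate: bytes) -> bool:
        # There is no "decrypt": verification recomputes the hash and compares.
        return hashlib.sha256(candidate).hexdigest() == stored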

Hashes were invented a long time ago and have been improving over the years. The old hashes are not secure anymore, given the increases in computing power. That is what they were talking about when they referred to recovering the plain-text passwords: they computed passwords that result in the hashes stored in the database.

Finding the passwords, given a database of password hashes, boils down to taking a candidate password, computing its hash with the algorithm used, and comparing it to the hashes stored in the database. When a match is found, we have a good password. This is where the cost of computing the hashes comes in: older hashes are much faster, newer hashes are much slower. With the advent of rental cloud computing services this is becoming a smaller distinction, though. All SHA-1 hashed passwords of up to 6 characters in length could be brute-forced in 49 minutes with the help of Amazon EC2 for a cost of $2 two years ago, and it is getting cheaper and faster. So here is where speed matters, but it has the opposite effect: the hash, to be secure, must be a very, very slow one. Almost too slow to be useful at all would be a good start.
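In code, the attack described above is nothing more than a loop. A toy sketch with SHA-1 and a hypothetical word list (the digest below is simply the SHA-1 of “123456”):

    import hashlib

    # Hashes from the stolen, unsalted database (one shown for illustration).
    leaked = {"7c4a8d09ca3762af61e59520943dc26494f8941b"}

    for candidate in ["password", "letmein", "123456"]:
        if hashlib.sha1(candidate.encode()).hexdigest() in leaked:
            print("cracked:", candidate)   # the faster the hash, the cheaper this loop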

Even if computer systems were not constantly getting blisteringly fast compared to the blisteringly fast of five years ago, a workaround was figured out a long time ago. If you are prepared to invest in some large storage, you can slowly but surely compute an enormous number of hashes and keep them somewhere. When the time comes, you just have to compare the hashes you computed in advance to the hashes in the stolen password database. This is called using rainbow tables. And it is bloody effective.
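Conceptually (real rainbow tables use hash chains and a time/memory trade-off, but a plain precomputed lookup table makes the same point): hash the candidates once up front, and cracking any unsalted database later is just a dictionary lookup. The word list here is a made-up placeholder.

    import hashlib

    # Precompute once, reuse against every unsalted hash database obtained later.
    table = {hashlib.sha1(pw.encode()).hexdigest(): pw
             for pw in ["123456", "password", "letmein"]}

    def crack(leaked_digest):
        return table.get(leaked_digest)    # instant lookup, no hashing at crack time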

Ok, ok, it is not all that gloomy. This fight is an old one and we have defenses. A very effective measure against rainbow tables is a cryptographic salt. A salt is an additional piece of data supplied to the hash function together with the password. Since the attacker did not know the salt in advance, precomputed rainbow tables suddenly become useless. Great. Unfortunately, many sites use a fixed salt that is generated once and set in stone. This effectively makes rainbow tables useful again: one just has to compute them once with that salt for the whole database. So the salt, to be useful, must be generated anew for every password and stored together with the password.
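A sketch of per-password salting with the standard library (os.urandom as the random source; the fast SHA-256 is kept here only to isolate the salting idea, the slow hash comes next): because each salt is random and stored next to its hash, no table computed in advance, and no table computed for one user, helps with any other.

    import hashlib, os

    # Note: still a fast hash, to keep the example short; the final recipe is below.
    def hash_password(password: bytes):
        salt = os.urandom(16)                               # fresh random salt per password
        digest = hashlib.sha256(salt + password).hexdigest()
        return salt, digest                                 # store both in the user record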

So, finally, the answer is simple: a cryptographically strong, contemporary (as in “very, very slow”) one-way hash function with a randomly generated salt for every password. And anything deviating from that is just plain tomfoolery.
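Putting it all together, a minimal sketch using Python’s standard library, with PBKDF2 standing in for “contemporary and very, very slow” (scrypt, bcrypt or Argon2 follow the same pattern; the iteration count is an assumption to be tuned on your own hardware):

    import hashlib, hmac, os

    ITERATIONS = 600_000        # raise until hashing is almost too slow to be useful

    def store(password: str):
        salt = os.urandom(16)                                    # random salt per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest                                      # keep both in the record

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)            # constant-time comparison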