User Data Manifesto

Confirmation that governments spy on people on the Internet and have access to private data they should not have has sparked some interesting initiatives. One such initiative is the User Data Manifesto:

1. Own the data
The data that someone directly or indirectly creates belongs to the person who created it.

2. Know where the data is stored
Everybody should be able to know: where their personal data is physically stored, how long, on which server, in what country, and what laws apply.

3. Choose the storage location
Everybody should always be able to migrate their personal data to a different provider, server or their own machine at any time without being locked in to a specific vendor.

4. Control access
Everybody should be able to know, choose and control who has access to their own data to see or modify it.

5. Choose the conditions
If someone chooses to share their own data, then the owner of the data selects the sharing license and conditions.

6. Invulnerability of data
Everybody should be able to protect their own data against surveillance and to federate their own data for backups to prevent data loss or for any other reason.

7. Use it optimally
Everybody should be able to access and use their own data at all times with any device they choose and in the most convenient and easiest way for them.

8. Server software transparency
Server software should be free and open source software so that the source code of the software can be inspected to confirm that it works as specified.

Password recovery mechanisms – Part 3

Passwords remain the main means of authentication on the internet. People often forget their passwords and then have to recover access to a website’s services through some kind of mechanism. We try to make that so-called “password recovery” simple and automated, of course. There are several ways to do it, and all but one of them are wrong. Let’s see how it is done.

If you did not read Part 1 – Secret questions and Part 2 – Secondary channel, I recommend you do so before reading on.

Part 3 – Example procedure: put it all together

Security - any lock matters as much as any other.

Let’s assume we are putting together a website where passwords are stored as salted hashes and users are identified by their e-mail address. I will describe what I think is a good strategy for password recovery, and you are welcome to comment and improve on it.
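
To make the setup concrete, here is a minimal sketch of what “salted hash” storage could look like. It is an illustration only: the function names and the choice of PBKDF2 with SHA-256 and 200,000 iterations are my assumptions, not a prescription.

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per user defeats precomputed (rainbow) tables.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(candidate, stored)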

Since we have the users’ e-mail addresses, that is the natural secondary authentication channel. So if a user needs password recovery, we will use their e-mail to authenticate them. Here is how.

The user comes to the login page and clicks the “forgot password” link or similar. They then have to provide an e-mail address. The form for submitting the e-mail address has to have some means of countering automated exhaustive searches, both to lower the load on the server in case of an attack and to provide some small level of discouragement against such attacks. Two ways come to mind: using a CAPTCHA, and slowing down form submission with a large random delay (on the order of seconds). Let’s not go into the holy war over CAPTCHA; you are welcome to use any other means you can think of and, please, suggest them so that others can benefit from your thoughts here. You should also provide an additional hidden “honeypot” field that an automated form-scanning robot will fill in, so you can detect that too and discard the request. Anyway, the important part is: slow down the potential attacker. The person going through recovery will not mind if it takes a while.
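
A rough sketch of such screening, assuming a hypothetical honeypot field named "website" that is hidden from humans by CSS; the delay bounds are arbitrary examples.

    import random
    import time

    def screen_recovery_form(form: dict) -> bool:
        # A robot that blindly fills every field gives itself away in the honeypot.
        if form.get("website"):
            return False
        # Slow everything down a little: a human will not notice a few seconds,
        # but bulk guessing becomes much less attractive.
        time.sleep(random.uniform(2.0, 5.0))
        return True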

As the next step, we look up the user’s e-mail address in the database, create a random token, mail it out and provide feedback to the user. The feedback should be given in constant time, so that an attacker cannot use your recovery mechanism to harvest valid e-mail addresses from your website. The process should therefore take the same time whether you found the user or not. This is difficult to get right, and the best solution is to store the request for off-line processing and return immediately. Another way is to use user names instead and look up the e-mail address, but a user is more likely to know their own e-mail address than to remember their user name, so there is a caveat. If you cannot (or will not) do off-line processing of requests, you should at least measure your server and try to equalize the timing with delays. The server’s timing can be measured fairly precisely, and this is difficult to get right, especially under fluctuating load, but you must give it a try. Still, it is best if you store the submitted information and trigger off-line processing while reporting to the user something along the lines of “if your e-mail is correct, you will receive an automated e-mail with instructions within an hour”. The feedback should never say whether the e-mail is correct or not.
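
One way to sketch the “store and process off-line” idea: the request handler only records the address and always returns the same message, so its timing does not depend on whether the address exists. The queue and the wording are illustrative assumptions.

    import queue

    recovery_requests = queue.Queue()  # drained by a separate worker, off the request path

    def handle_recovery_request(email: str) -> str:
        # No database look-up here: just record the request and answer in
        # (practically) constant time, whether or not the address is known.
        recovery_requests.put(email)
        return ("If your e-mail address is correct, you will receive an "
                "automated e-mail with instructions within an hour.")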

Now we generate a long, random, cryptographically strong token. It must be cryptographically strong because the user may actually be an attacker and if he can guess how we generate tokens and can do the same, he will be able to reset passwords for arbitrary users. We generate the token, format it in a way that can be e-mailed (base64 encoding, hex, whatever) and store it in a database together with a timestamp and the origin (e-mail address). The same token is then e-mailed to the e-mail address of the user.
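
A sketch of token generation and storage, assuming a simple SQLite table named "tokens"; secrets.token_urlsafe draws from the operating system’s CSPRNG and produces a URL- and e-mail-safe string.

    import secrets
    import sqlite3
    import time

    db = sqlite3.connect("recovery.db")
    db.execute("CREATE TABLE IF NOT EXISTS tokens (token TEXT, email TEXT, created REAL)")

    def issue_token(email: str) -> str:
        token = secrets.token_urlsafe(32)   # 32 random bytes, base64url-encoded
        db.execute("INSERT INTO tokens VALUES (?, ?, ?)", (token, email, time.time()))
        db.commit()
        return token                        # goes to the mailer, never back to the browser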

The user receives the token, comes to the website and goes to the form for token verification. Here he has to enter his e-mail address again, the token and, of course, the new password. In principle, some measure against automated searches is in order here too, to lower the load on the server in case of an attack. The token is verified against our database and then the e-mail is checked too. If we see a token, we remove it from the database in any case, then we check whether the e-mail matches and continue only if it does. This way, tokens are single use: once we see a token, it is removed from the database and cannot be used again.
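
The single-use rule can be expressed directly in code. Continuing the hypothetical "tokens" table from above: the token is deleted as soon as it is seen, match or no match.

    def consume_token(token: str, email: str) -> bool:
        row = db.execute("SELECT email FROM tokens WHERE token = ?", (token,)).fetchone()
        # Remove the token unconditionally: it is burned whether or not the e-mail matches.
        db.execute("DELETE FROM tokens WHERE token = ?", (token,))
        db.commit()
        return row is not None and row[0] == email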

Tokens also expire. We must have a server-side policy that sets the expiration period; let’s say it is 24 hours. Before we do any look-up in our token database, we run a query that removes all tokens with a creation timestamp older than 24 hours. That way, any expired token is gone from the database before we start looking.
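
The expiry policy then becomes a single deletion query run before any look-up; 24 hours is just the example period from above.

    def purge_expired_tokens(max_age_hours: int = 24) -> None:
        cutoff = time.time() - max_age_hours * 3600
        db.execute("DELETE FROM tokens WHERE created < ?", (cutoff,))
        db.commit()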

Now, if the token matches and the e-mail is correct, we can look up the user in our password database and update the password hash to the new value. Then we flush the authentication tokens and session identifiers for that user, forcing a logout of all pre-existing sessions. Simple.
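
Putting the last step together, a sketch of the reset itself, continuing the snippets above; the "users" and "sessions" tables are hypothetical, and hash_password is the salted-hash helper sketched earlier.

    def reset_password(email: str, token: str, new_password: str) -> bool:
        purge_expired_tokens()                      # expired tokens never match
        if not consume_token(token, email):
            return False
        salt, digest = hash_password(new_password)
        db.execute("UPDATE users SET salt = ?, pw_hash = ? WHERE email = ?",
                   (salt, digest, email))
        db.execute("DELETE FROM sessions WHERE email = ?", (email,))  # log out everywhere
        db.commit()
        return True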

Password recovery mechanisms – Part 2

Passwords remain the main means of authentication on the internet. People often forget their passwords and then have to recover access to a website’s services through some kind of mechanism. We try to make that so-called “password recovery” simple and automated, of course. There are several ways to do it, and all but one of them are wrong. Let’s see how it is done.

If you did not read Part 1 – Secret questions, I recommend you do so before reading on.

Part 2 – Secondary channel

A second way to do recovery is to use a secondary channel for authentication. Once the user is authenticated on this secondary channel, the password for the primary channel can be changed. The secondary channel may be slower and more cumbersome, but since it is used rarely, that is not a problem.

You could ask the person to call user support. Support would ask some questions about personal information and compare the answers with what they have on file. That effectively reduces the system to the “secret questions” described in Part 1. There are better (and cheaper) ways to do it.

Historically, the server usually stores the e-mail address that the user provided at registration, and that is what becomes the secondary channel. Although it still runs over the Internet, capturing e-mails on their way to the intended recipient is not a trivial task unless you control one of the nodes through which the e-mail is routed.

Originally, passwords were stored in plaintext on the server and the user could request the password to be e-mailed. Some services still operate like that. The notorious Mailman list server e-mails you your plaintext password once a month in case you forgot it. That is convenient but has a bit of a security problem, of course: should the password database be obtained by an attacker, all passwords to all accounts are immediately known. On the other hand, it has the advantage that user passwords are never actually changed, so if someone requests a password reminder, the original subscriber will receive an e-mail and that’s all.

The inventive thought then went to the idea of hashing the passwords for storage, which is a great idea in itself and protects the passwords in case the database gets stolen. It has the side effect that the password is suddenly not known to the server anymore; only the hash is. That is sufficient for authentication but is not very helpful if you want to mail out a password reminder. So someone had the bright idea that the password reminder should become a password reset: when a user requests it, the server generates a new password, sends it to the user, and changes the hash in the database to the new password’s hash. All secure and … very prone to denial-of-service attacks. Basically, anyone may now request a password reset for any user at will and that user’s password will get changed. Very annoying.

So we went further and decided that changing the password outright is not such a good idea. What we do instead is keep a separate database of single-use tokens. When a user requests a password change, we generate a unique random token, keep the token in the database and send it out to the user. If the user did not request a token, they need not react; the password was not changed and the token will harmlessly expire some time later. When the user does want the password change, he provides the token back to the service in a password-change form (or through a clicked URL), which allows us to perform this secondary authentication and then change the primary password. And that’s the way to do it.
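
The whole flow fits in a few lines. This is a deliberately tiny sketch: the in-memory dictionary stands in for a real token store, and send_mail and set_password are assumed helpers, not real functions.

    import secrets

    pending = {}   # token -> e-mail address; stand-in for a persistent token store

    def request_reset(email: str) -> None:
        token = secrets.token_urlsafe(32)
        pending[token] = email
        send_mail(email, "Your password reset token: " + token)   # assumed helper

    def complete_reset(token: str, email: str, new_password: str) -> bool:
        if pending.pop(token, None) != email:    # single use; a wrong guess burns it
            return False
        set_password(email, new_password)        # assumed helper
        return True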

There are variations where the secondary channel can be an SMS, an automated telephone call, or even an actual letter from the bank. But the important thing is that those messages only provide a token that verifies your identity on the secondary channel before allowing a security relevant operation on the primary channel.

Next, we will look at an example procedure for a website in Part 3.

Password recovery mechanisms – Part 1

Passwords remain the main means of authentication on the internet. People often forget their passwords and then have to recover access to a website’s services through some kind of mechanism. We try to make that so-called “password recovery” simple and automated, of course. There are several ways to do it, and all but one of them are wrong. Let’s see how it is done.

Part 1 – Secret questions

A widespread mechanism is to use so-called “secret questions”. This probably originates with banks and their telephone service, where they ask you several questions to compare your knowledge of personal information with what they have on file. In the times before the internet this was a fair mechanism, since coming up with all that personal information was a tough task that often required physically going to a place and rummaging through garbage cans to find things out. Still, some determined attackers would do precisely that – dumpster diving – and could gain access to bank accounts even in those times.

Right now this mechanism is, of course, a total fallacy. The internet holds so much information about you that it is hard to imagine the answers to questions about your private life remaining a mystery to an attacker for long. Your birthday, your dog, your school and schoolmates, your spouse and your doctor – they are all there. It is hard to come up with a generic question that would suit everyone and at the same time would not have the answer printed on your favorite social network page.

And even if it is not, imagine that the secret question is “what’s your dog’s name?” How many dog names are there? Not as many as letter combinations in a password, and the most common dog names are probably only a handful. So it is far easier to brute-force a security question than a password.
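
A quick back-of-the-envelope comparison, with the numbers being generous guesses rather than measured figures:

    import math

    dog_names = 10_000            # a very generous count of plausible dog names
    password_space = 62 ** 10     # 10 characters drawn from [a-zA-Z0-9]

    print(password_space / dog_names)   # ~8.4e13: trillions of times more guesses needed
    print(math.log2(dog_names))         # ~13 bits of secret in the "question"
    print(math.log2(password_space))    # ~59 bits in the password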

This mechanism of secret questions and answers is antiquated and should not be used.

There is a variation where you have to provide your own question and your own answer. This is no better. Most people will tend to pick the obvious questions anyway. The attacker will see the question and can dig for information. The answer will usually be that one word that is easy to brute-force. So, no good.

And, by the way, what should you do when you are presented with this folly on a website you use? Provide a strong password instead of the answer, and store that password in whichever way you store all your other recovery passwords. All the other rules of password management apply.
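
In other words, treat the “answer” as just another machine-generated password. One way to do it, using Python’s secrets module:

    import secrets

    # Use this string as the "answer" and keep it in your password manager
    # alongside the account's real password.
    answer = secrets.token_urlsafe(16)   # 16 random bytes, about 128 bits
    print(answer)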

So much for secret questions. In the next part, we will see how to do password recovery with a secondary channel.

Getting revenue on security?

I am now looking into arguably the hardest problem in security: how to make it pay off. Security is usually seen as a risk management tool, where increasing the security investment lowers the risk of costly disasters. But the trade-off between security and risk is hard to evaluate, and there is a bias toward ignoring rare risks.

Notice that we keep talking about costs. We lower costs, and not even actual costs but potential ones; we do not increase revenue.

For example, when we talk about a product, we can look at improvements that would get us more of the following to improve the bottom line:

  1. Acquisition – getting more users or clients
  2. Activation – getting the users or clients to make a purchase
  3. Activity – getting your users or clients to come back for more

Can security demonstrate similar improvements and move from cost cutting to revenue generation? Share your opinion, please!

Exodus from Java

Finally, the news I was subconsciously waiting for: the exodus of companies from Java has started. It does not come as a surprise at all. Java never fulfilled the promises it made at the beginning: it did not deliver the portability, the security or the ease of programming. I am only surprised it took so long, although, knowing full well that managers routinely optimize for their own careers and bonuses, that does not come as a shock either.

For those not in the know, the gist of the problem is that Java always promised to provide some sort of “inherent security”. That is, no matter how badly you write the code, it will still be more secure than code written in C or other high-level languages. Java claimed the absence of buffer overflows, null pointer dereferences and similar problems, all of which turned out to be untrue after all. And that had a very important consequence.

The consequence is that anyone writing in Java, or learning it, is subconsciously aware of that promise, so people tend to relax and allow themselves to be sloppy. As a result, code written in C ends up being tighter, more organized and more secure than code written in Java, and C developers tend, on average, to be better educated in the intricacies of software development and more aware of potential pitfalls. And that makes a huge difference.

So, the punch line is, if you want something done well, forget Java.

Apple – is it any different?

The article “Password denied: when will Apple get serious about security?” in The Verge talks about Apple’s insecurity and blames Apple’s badly organized security and the absence of any visible security strategy and effort. Moreover, it seems Apple is not even taking security sufficiently seriously.

“The reality is that the Apple way values usability over all else, including security,” Echoworx’s Robby Gulri told Ars.

It is good that Apple gets a bit of bashing, but are they really all that different? If you look around and read about all the other companies, you quickly realize it is not just Apple; it is a common, too common, problem: most companies do not take security seriously. And they have a good reason: security investment cannot be justified in the short term, it cannot be sold, and it cannot be turned into bonuses and raises for the management. And the risks are typically ignored, as I already discussed previously.

So in this respect Apple simply follows in the footsteps of all the other software companies out there. They invest in features, in customer experience, in brand management, but they ignore security. Even the recent, widely publicized scandal in which Mat Honan’s digital life was wiped out did not change much, if anything. The company did not suffer enough damage to start thinking about security seriously. The damage was done to a private individual and did not translate into any impact on sales. So it demonstrated once again that security problems do not damage the bottom line. Why else would a company care?

We need the damage done to external parties to be internalized and absorbed by the companies. As long as it stays external, they will not care. It is exactly the same as with ecology: ecological costs are external to the company, so it will not care unless regulation makes those costs internal. We need a mechanism for internalizing the security costs.

Security training – does it help?

I came across the suggestion to train (nearly) everyone in the organization in security subjects. The idea is very good: we often have the problem that management has absolutely no knowledge of, or interest in, security and therefore ignores the subject despite the efforts of the security experts in the company. Developers, quality, documentation, product management – they all need to be aware of how serious software security is for the company and recognize that sometimes security must be given priority.

But will it help? I have spent a lot of time educating developers and managers on security. My experience is that it does not help most people. Some people get interested and involved – those are the ones naturally inclined to take good care of all aspects of their products, including but not limited to security. Most people do not really care. They are not interested in security; they are there to do a job and get paid. And nobody gets paid for more security.

That repeatedly results in situations like the one described in the article:

“If internal security teams seem overly draconian in an organization, the problem may not be with the security team, but rather a lack of security and risk awareness throughout the organization.”

Unfortunately, a simply informative security training is not going to change that. People tend to ignore rare risks, and that is what happens to the security of product development. What we need is not a security awareness course but a way to “hook” people on security, a way to make them understand, deep inside, that security is important, not in an abstract way but to them personally. Then security will work. How do we do that?

Brainwashing in security

At first, when I read the article titled Software Security Programs May Not Be Worth the Investment for Many Companies I thought it was a joke or a prank. But then I had a feeling it was not. And it was not the 1st of April. And it seems to be a record of events at the RSA Conference. Bloody hell, that guy, John Viega from SilverSky, “an authority on software security”, is speaking in earnest.

That is one of those people who keep making the state of security as miserable as it is today. His propaganda is simple: do not invest in security, it is a total waste of money.

“For most companies it’s going to be far cheaper and serve their customers a lot better if they don’t do anything [about security bugs] until something happens. You’re better off waiting for the market to pressure on you to do it.”

And following suit was the head of security at Adobe, Brad Arkin, can you believe it? I am not surprised that we now have to use things like NoScript to make sure their products do not execute in our browsers. I have a pretty good guess at what Adobe and SilverSky are doing: they are covering their asses. They do the minimum required to make sure they cannot easily be sued for negligence and for deliberately exposing their customers. But they do not care about you and me, they do not give a damn whether their software is full of holes. And they do not deserve to be called anything that has the word ‘security’ in it.

The stuff Brad Arkin is pushing at you flies in the face of the very security best practices we swear by:

“If you’re fixing every little bug, you’re wasting the time you could’ve used to mitigate whole classes of bugs,” he said. “Manual code review is a waste of time. If you think you’re going to make your product better by having a lot of eyeballs look at a lot of code, that’s the worst use of human labor.”

No, sir, you are bullshitting us. Your company does not want to spend the money on security. Your company is greedy. Your company wants to get money from the customers for the software full of bugs and holes. You know this and you are deliberately telling lies to deceive not only the customers but even people who know a thing or two about security. But we have seen this before already and no mistake.

 

The problem is that many small companies will believe anything pronounced at an event like the RSA Conference and take it for granted as the last word in security. And that will make the state of security even worse. We will have more of those soft-underbelly products and more companies that practice “security by ignorance”. And we do not want that.

The effect of security bugs can be devastating. The normal human brain is not capable of properly estimating risks of large magnitude but rare occurrence and tends to downplay them. That is why the risk of large security problems that can bring a company to its knees is usually discarded. But security problems can be so severe that they will put even a large company out of business, not to mention that a small company would not survive even a moderately severe security problem at all.

So, thanks, but no, thanks. Security is a dangerous field, it is commonly compared to a battlefield and there is some truth in that. Stop being greedy and make sure your software does not blow up.

The art of unexpected

I am reading through the report of a Google vulnerability on The Reg and laughing. It is a wonderful demonstration of what attackers do to your systems: they apply the unexpected. The story is always the same. When we build a system, we try to foresee the ways in which it will be used. Then the attackers come and use it in a way that was not expected. So the whole security field can be seen as an attempt to expand the expected and narrow down the unexpected. Strangely, the unexpected keeps getting bigger nonetheless. And, conversely, the art of being an attacker is the art of the unexpected: shrugging off conventions and expectations and moving into the unknown.

Our task, then, is to do the impossible: to protect the systems from the unexpected and, largely, the unexpectable. We do it in a variety of ways, but we have to remember that we are always on the losing side of the battle. There is simply no way to turn all of the unexpected into the expected. So this is the zen of security: protect the unprotectable, expect the unexpectable, and keep doing it in spite of the odds.
