Strategy towards more IT security: the road paved with misconceptions

The strategy towards more IT security in the “Internet of Things” is based almost entirely on misconceptions and ignorance. The policy makers simply reinforce each other’s “ideas” without any awareness of where the road they follow is leading.

As I listened in at the K-ITS 2014 conference, it became painfully obvious that most speakers should not be speaking at all. They should be listening. The conference was supposed to discuss strategies towards more IT security in the future industry that will have both factories and cars connected to the Internet. That future isn’t bright, far from it. We are fighting battles on the internet for web servers, personal computers and mobile phones now. We will be fighting battles for refrigerators, nuclear power plants and medical implants in the near future. We definitely need some better ideas for those battle plans. Instead, we hear, if anything, ideas on improving the attitudes of buyers, i.e. “how can we convince the customers that our security is okay and that they should pay more?”

I detail here five different misconceptions that were very obvious and widespread at the conference. Even security management at the top level shares them, though they should know better. And the worst part is, they all seem to believe that everything will be all right if they throw some important-sounding names and acronyms at it.


Divide security into “levels”

A prominent theme is the division of the industrial landscape into various “areas” of differing security requirements. There is nothing wrong with the concept itself, of course, except that it is applied in a context where it will do more harm than good.

The policy makers seem to think that they can divide the industry into ‘critical infrastructure’, ‘things that need security’, and ‘things that do not need security’. Right, for the sake of argument, assume we can. Then what? Then, they say, we will invest in security where it matters most. That, on the surface, looks like a sound plan.

The problems start when you try to apply this concept to software development. How do we distinguish between software written for ‘secure’ and ‘insecure’ applications? How do we make the authors of libraries and tools write their software to the highest standards to satisfy the ‘most secure’ part of the industry? What about the operating systems they use? What about people who wander from one company to another, bringing with them not only expertise but also mistakes and security holes?

Once you start thinking about this approach in practical terms, it quickly becomes untenable.

The only way to improve the security of any software is to improve the security level of the whole software industry. Software not written specifically for a high-security environment will end up there whether we want it or not. Developers neither skilled in nor trained for writing secure software will end up writing it anyway. It’s unavoidable.

But that is only one side of the problem. Why have the division in the first place? Yes, critical infrastructure is critical, but that stupid mirror with a network interface will also end up in a secure facility, and how do we know what the next attack path will look like? The noncritical infrastructure will be used to attack the critical infrastructure, isn’t it obvious? All infrastructure, all consumer devices need protection if we want to have a secure Internet of Things.

Software for all purposes is written everywhere by the same underpaid people who never had a proper security education. The general tendency for software quality and security is, unfortunately, to get worse. And as it gets worse everywhere, it gets worse, of course, for critical infrastructure as well as for consumer electronics.

Investment should go into the state of software in general, not into the state of some particular software. Otherwise, it won’t work.

Security should not prevent innovation

Says who? Not that I am against innovation, but security must sometimes prevent certain innovations, like the tweaking of cryptographic algorithms in ways that would break security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.

‘Innovation’ has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We have taken it too far; we pray to innovation in places where it does not matter or is even harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development, resulting not only in low security but in low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering constructions now, without any regard for proper planning and execution? Well, examples are easy to find and the results are disastrous.

What makes the big difference? We can notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or do we wait for things to collapse first?

Convince the customer

We are bent on convincing the customer that things are secure. Not making things secure, but convincing everyone around us that we are fine. Engaging in smoke and mirrors, that is. Instead of actually making things better, we announce that pretending things are better will somehow make them better. And we try, and succeed, to convince ourselves that this is somehow okay.

Well, it is not okay. We all understand the desire of commercial companies to avoid bad security publicity. We also know that eventually people catch on anyway. The rush to convince everyone and their grandma that things are going to be better exists precisely because people will soon be catching on to this foul play.

The market will shrink if people think there are security problems, but the market will crash when people find out they were lied to and that your words are not worth the electrons they use to come across the internet. Deceiving ourselves will lead to a disaster we have no way of controlling. This is simply a fast track to security by obscurity.

Secure components mean secure systems

There is a commonly shared misconception that using secure components will somehow automatically lead to secure systems. When confronted with this question directly, people usually realise their folly quickly and will likely fervently deny such thinking, but it is sufficient to listen to a presentation or two to realise that this is exactly the assumption behind many plans.

Secure components are never secure unconditionally. They are what we call conditionally secure: they are secure as long as a certain set of assumptions remains valid. Once an assumption is broken or not met, the component is no longer secure. Who checks those assumptions? Who verifies whether the developers upheld all of the assumptions that the developers of the underlying components specified? Who checks which assumptions remained undocumented?

When we combine components, we create a new problem: the problem of composition. This is not an easy problem at all. By putting two secure components together, you do not automatically obtain a secure system. It may well be secure. Or it may not.

This problem of secure composition is well known to the developers and auditors of smart cards. And they do not claim to have a solution. And here we are, developers of systems orders of magnitude more complex, dismissing the problem from our minds as if it were not even worth our consideration. That is folly.

We need those things on the internet

Who said that factories need to be on the internet? Who said that every single small piece of electronics or electrical device really needs to be on the internet? Why do we think that having all of those things “talk” to each other would suddenly make us all happy?

The industry and the governments do not want to deal with any of the real problems plaguing societies the world over. Instead, they want to produce more and more useless stuff that allows them to appear as if they are doing something useful. They will earn lots of money and waste a lot more resources in the process. Should they be worried?

Take “smart cars”, for example: cars that communicate with each other over some wireless protocol to report accidents, road conditions and traffic jams. Think about it. A car cannot communicate very far. On a highway, by the time you get news of a traffic jam from your neighbouring cars, you will already be standing in it. In the city, this information will be equally useless, because you will see the traffic jam and do what you always did: turn around and look for another street around the block. What of accidents? Again, that information is of little use to you in the city, where you basically don’t need it. They say cars will inform each other of accidents, but this information cannot be transmitted very far. By the time your car receives information about an accident on the highway ahead, displays it and you read it, you will be staring at the accident itself. The civil engineers are not that stupid, you know. They build highways so that you have enough time to see what is around the corner and react. Extra information would only distract the driver. So this whole idea is completely useless from the point of view of driving, but it will require enormous resources and some genius security solutions to artificially created problems.

And all of it is like that. We don’t need an “internet of things” in the first place. We should restrict what gets on the internet, not encourage the uncontrollable proliferation of devices arbitrarily connected to the network simply to show off. Yes, we can. But should we?

Software Security vs. Food Safety

My friend works for a large restaurant chain in St. Petersburg. She is pretty high up in the chain of command after all these years. We talk about all sorts of things when we meet, and once she told me how they have to deal with safety and quality inspections and how bothersome and expensive those are. So I teased her: why don’t they just pay off the officials to get a certificate instead of going through all the trouble? And she answered seriously that that would only be good for a fly-by-night business that does not care about clients or reputation.

Food-poisoning a customer would have severe consequences. Their chain has been around for more than ten years, she said, and they do not want to risk any accident destroying their reputation and client base. In the long run, she said, they are better off establishing the right procedures and enduring the audits that help them protect the health and safety of their clients. And she named three kinds of losses they work hard to prevent: direct losses from accidents, loss of customers as the result of a scare, and long-term loss of customer base as the result of decaying reputation and trust.

I think there is something we could learn here. The software industry has become completely careless in recent years, and the protection of the customer is so far down the to-do list you can’t even see it. Judging by their customer care, most businesses in the software industry are fly-by-night. And if they are not, what makes them behave as if they were? Is there some misunderstanding in the industry that security problems do not cause costs, perhaps? Evidently, the companies in the software industry do not care about moral values, so let us see how the same three kinds of losses apply. Maybe I can convince you to rethink the importance of customer care for the industry.

Sony PlayStation Network had an annual revenue of $500 million in 2011, with about a 30% margin, giving them a healthy $150 million in profit a year. That year, their network was broken into and hackers stole millions of personal records, including credit card numbers. In the two weeks following the breach, Sony lost about $20 million, or 13% of their annual profit. When damage compensations of $171 million kicked in, the total shot up to about $191 million, making Sony PlayStation Network lose over $40 million that year. Some analysts say that the long-term damages to the company could run as high as $20 billion. How would you like to work for 40 years just to repay a single security incident? And Sony is a large company; they could take the hit. A smaller company might have keeled over.


And these kinds of costs can come completely unexpectedly from all sorts of security incidents. Thanks to government pressure, we now hear about companies suffering financial disadvantage from incidents that used to be ignored. The US Department of Health & Human Services fined Massachusetts Eye and Ear $1.5 million for the loss of a single laptop that contained unencrypted information. “Oops” indeed. The same year, the UK Information Commissioner’s Office fined Welcome Financial Services Ltd. £150,000 for the loss of two backup tapes. Things are heating up.

Now, the Sony PlayStation Network breach did not only cost Sony money. The company Brand Index, which specializes in measuring company image in gaming circles, determined that that year the Sony PlayStation image became negative for the first time in the company’s history. Gamers actively disliked the Sony brand after the incident. That was enough to relegate Sony from the position of leader in the gaming industry to “just a member of the pack”.

More interesting tendencies can be seen in the retail industry. TJX, a company operating several large retail chains, suffered a breach back in 2005, when hackers got away with 45 million credit card records. At the time, analysts were predicting large losses of sales that never materialized. TJX paid $10 million in settlements of the charges, ran some promotions, and sales did not dip.

Fast forward to December 2013: now Target suffers a security breach in which 70 million customer records and 40 million credit card numbers are stolen. Target did not appear too worried and engaged in the familiar tactics of promotions and discount offers. And then the inconceivable happened. The customers actually paid attention and walked away. Total holiday spending dropped by 9.4%, sales were down 1.5% despite 10% discounts and free credit monitoring offered by the company, and as a result the company’s stock dropped 1.8%. At the scale of Target, we are talking about billions upon billions of dollars.


So, what happened? In 2005, the industry worried but customers did not react. In 2013, the industry habitually did not worry, but customers took notice. Things are changing, even in the market and industry where software security was never of any interest to either shops or customers. People are starting to pay attention.

Now, if we talk about customer trust and industry image, the food industry serves as a pretty good role model. They have a lot of experience stretching back hundreds of years; they must have figured out a few useful things we could think of applying to our software industry. Take the dioxin scare of 2011. The tracing abilities of the food industry allowed them to find out easily how industrial oil got into animal feed and to trace it to particular farms. Right away, the chickens and pigs at those farms were mercilessly culled. That’s what we call incident response all right. In the aftermath, the food industry introduced a regulation requiring the physical separation of industrial and food oil production, and created a requirement for labs to publish dioxin findings in food samples immediately.

The food industry has learned that they will not be perceived well if they kill their customers. They are making an effort to establish long-term trust. That’s why they have good traceability, they are merciless in their incident response and they quickly establish new rules that help improve customer confidence. Take the story of the horse meat fraud of 2013, where horse meat was sold as beef across Europe. That was not dangerous to health; it was a fraud, selling cheaper meat in place of more expensive meat. The food industry traced it all back to its origin and found out that the liability for this kind of fraud was insufficient: even after paying the fines, the companies that engaged in the fraud were making a handsome profit. But customer confidence suffered immensely. And the industry took swift action; a proposal to increase the penalties and take tougher measures was accepted by the European Parliament already on the 14th of January.

What can we learn from the food industry? They have great traceability of products, detection of all sorts of misbehavior and dangerous agents, and requirements to publish data. The penalties are kept higher than the potential gain, and the response is swift and merciless: either recall or destruction of contaminated goods. All of this taken together helps the industry keep their customers’ trust.

Try to imagine that HTC were required to recall and destroy all those millions of mobile phones that were found to have multiple security vulnerabilities in 2012. Well, HTC did not waltz away easily, as has happened in so many cases before. They had to patch up those millions of mobile phones, pass an independent security audit every two years, and, perhaps most telling, they are obliged to tell the truth and nothing but the truth when it comes to security.

And this kind of thing will happen more and more often. The customers and governments are taking an interest in security, they notice when something goes wrong, and we have a big problem on our hands now, each company individually and the industry as a whole. We will get more fines, more orders to fix things, more new rules imposed and so on. And you know what? It will all go fast, because we always claim that software is fast: it is fast to produce software, make new technology, the innovation pace and all that. People and organizations are used to thinking of the software industry as fast. So we will not get much advance notice. We will just get hit with requirements to fix things and fix them immediately. I think it would do us good to actually take some initiative and start changing things ourselves, at a pace that is comfortable for us.

Or do we want to sit around and wait for the crisis to break out?

Google bots subversion

There is a lot of truth in the saying that every tool can be used for good and for evil. There is no point in blocking the tools themselves, as the attacker will turn to new tools and subvert very familiar tools in unexpected ways. Now Google crawler bots have been turned into such a weapon, executing SQL injection attacks against websites chosen by the attackers.

The discussion of whether Google should or should not do anything about this is interesting, but we are not going to talk about that. Instead, consider that this is a prime case of a familiar tool, one that comes back to your website regularly, subverted into doing something evil. You did not expect that to happen, and you cannot just block Google from your website. This is a perfect example of a security attack where your application security is the only way to stop the attacker.

The application must be written in such a way that it does not matter whether it is protected by a firewall: you will not always be able to block attacks with a firewall. The application must also be written so that it withstands unanticipated attacks, things you were not able to predict in advance. The application must be prepared to ward off attacks that did not yet exist at the time of writing. Secure design and coding cannot be replaced with firewalls and add-on filtering.
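
The SQL injection attacks carried by the crawler can only be stopped at the application layer, and the standard defence is to never concatenate attacker-reachable input into SQL. A minimal sketch (the table, the handler name and the payload are all illustrative):

```python
import sqlite3

# Hypothetical request handler: `name` is attacker-controlled (it may arrive
# via a crafted URL that a crawler bot dutifully fetches), so the query
# itself must be injection-proof.
def search_products(conn, name):
    # Parameterized query: the driver treats `name` strictly as data,
    # so no crafted value can alter the SQL statement.
    return conn.execute(
        "SELECT id, name FROM products WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'fridge')")

# A classic injection payload comes back empty instead of dumping the table.
print(search_products(conn, "' OR '1'='1"))  # []
print(search_products(conn, "fridge"))       # [(1, 'fridge')]
```

No firewall is involved: the application stays safe even when the malicious request arrives from a source you cannot block.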

Only such securely designed and implemented applications withstand unexpected attacks.

Password recovery mechanisms – Part 3

Passwords remain the main means of authentication on the internet. People often forget their passwords and then have to recover access to a website’s services through some kind of mechanism. We try to make that so-called “password recovery” simple and automated, of course. There are several ways to do it, and all but one of them are wrong. Let’s see how it should be done.

If you did not read Part 1 – Secret questions and Part 2 – Secondary channel, I recommend you do so before reading on.

Part 3 – Example procedure: put it all together

Security - any lock matters as much as any other.

Let’s assume we are putting together a website, that we store passwords in salted-hash form, and that we identify users by their e-mail addresses. I will describe what I think is a good strategy for password recovery, and you are welcome to comment on it and improve upon it.
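
For the “salted hash form” part, a minimal sketch using a vetted key-derivation function from the Python standard library (the iteration count is illustrative; pick one appropriate for your hardware):

```python
import hashlib
import hmac
import os

# Store (salt, digest), never the password itself. PBKDF2-HMAC-SHA256
# is a standard, deliberately slow construction for password hashing.
def hash_password(password, salt=None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

The per-user random salt means two users with the same password still get different digests, defeating precomputed rainbow tables.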

Since we have the users’ e-mail addresses, that is the natural secondary authentication channel. So when a user needs password recovery, we will use their e-mail to authenticate them. Here is how.

The user comes to the login page and clicks the “forgot password” link or similar. They then have to provide an e-mail address. The form for e-mail address submission has to have some means of countering automated exhaustive searches, both to lower the load on the server in case of an attack and to provide some small level of discouragement against such attacks. Two ways come to mind: using a CAPTCHA, and slowing down the form submission with a large random delay (on the order of seconds). Let’s not go into the holy war over CAPTCHA; you are welcome to use any other means you can think of and, please, suggest them so that others can benefit from your thoughts here. You could also provide an additional hidden field that will be automatically filled in by an automated form-scanning robot, so you can detect that too and discard the request. Anyway, the important part is: slow down the potential attacker. A person going through recovery will not mind if it takes a while.
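
The honeypot field and the random delay can be sketched in a framework-agnostic way; here `form` stands for the submitted fields and "website_url" is a hypothetical honeypot field, hidden from humans but typically filled in by form-scanning robots:

```python
import random
import time

# Throttle recovery-form submissions: discard obvious robots, slow down
# everyone else. The delay range is illustrative.
def accept_recovery_request(form, delay_range=(2.0, 8.0)):
    if form.get("website_url"):               # honeypot filled in: a robot
        return None                           # silently discard the request
    time.sleep(random.uniform(*delay_range))  # slow every submission down
    return form.get("email")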

As the next step, we look up the user’s e-mail address in the database, create a random token, mail it out and give feedback to the user. The feedback should be given in constant time, so that an attacker cannot use your recovery mechanism to collect valid e-mail addresses from your website. The process should therefore take the same time whether you found the user or not. This is difficult to get right, and the best solution is to store the request for off-line processing and return immediately. Another way is to use user names instead and look up the e-mail address, but a user is more likely to know their own e-mail address than to remember their user name, so there is a caveat. If you cannot (or will not) do off-line processing of requests, you should at least measure your server and try to make the timings similar with artificial delays. The timing of the server can be measured fairly precisely, and this is difficult to get right, especially under fluctuating load, but you must give it a try. Still, it is best if you store the submitted information and trigger off-line processing while reporting to the user something along the lines of “if your e-mail is correct, you will receive an automated e-mail with instructions within an hour”. The feedback should never say whether the e-mail is correct or not.
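
The “store the request and return immediately” approach can be sketched with a simple queue (an in-process queue here; a real deployment would use a persistent job queue):

```python
import queue

# The web handler only enqueues the address, so its timing cannot reveal
# whether the address exists in the database; a background worker does
# the actual lookup and mailing later.
recovery_queue = queue.Queue()

def handle_recovery_form(email):
    recovery_queue.put(email)  # no database lookup here: timing stays uniform
    return ("If your e-mail is correct, you will receive an automated "
            "e-mail with instructions within an hour.")

message = handle_recovery_form("user@example.com")
```

The response text is deliberately identical for existing and non-existing addresses.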

Now we generate a long, random, cryptographically strong token. It must be cryptographically strong because the user may actually be an attacker, and if he can guess how we generate tokens and do the same, he will be able to reset passwords for arbitrary users. We generate the token, format it in a way that can be e-mailed (base64 encoding, hex, whatever) and store it in a database together with a timestamp and the origin (the e-mail address). The same token is then e-mailed to the user’s e-mail address.
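
In Python, the standard `secrets` module gives exactly this: unguessable tokens from the OS CSPRNG, already in an e-mailable encoding. A sketch, with an in-memory dict standing in for the database table:

```python
import secrets
import time

# token -> (origin e-mail, creation timestamp)
token_store = {}

def issue_token(email):
    token = secrets.token_urlsafe(32)          # 32 random bytes, URL-safe base64
    token_store[token] = (email, time.time())  # stored with timestamp and origin
    return token                               # this value gets e-mailed out

token = issue_token("user@example.com")
```

Using `random.random()` or a timestamp here would be the classic mistake: those are predictable and let an attacker mint valid tokens.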

The user receives the token, comes back to the website and goes to the token verification form. Here he has to enter his e-mail address again, of course, along with the token and the new password. In principle, some measure against automated searches is in order here too, to lower the load on the server in case of an attack. The token is verified against our database and then the e-mail is checked too. If we see the token, we remove it from the database regardless, then we check whether the e-mail matches and continue only if it does. This way, tokens are single-use: once we see a token, it is removed from the database and cannot be used again.

Tokens also expire. We must have a policy on our server that sets the expiration period. Let’s say it is 24 hours. Before we do any lookup in our token database, we perform a query that removes all tokens with a creation timestamp older than 24 hours. That way, any expired token is already gone from the database when we start looking.
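
Both rules, purge expired tokens before any lookup and consume a token on sight, fit in one small sketch (a dict stands in for the database; in SQL the purge would be a `DELETE ... WHERE created_at < ?`):

```python
import time

TOKEN_LIFETIME = 24 * 3600   # the 24-hour policy from the text
token_db = {}                # token -> (email, created_at)

def redeem_token(token, email, now=None):
    now = time.time() if now is None else now
    # Expiry first: drop every token older than the policy allows.
    expired = [t for t, (_, ts) in token_db.items() if now - ts > TOKEN_LIFETIME]
    for t in expired:
        del token_db[t]
    # Single use: remove the token unconditionally, then check the e-mail.
    record = token_db.pop(token, None)
    return record is not None and record[0] == email

token_db["abc"] = ("user@example.com", time.time())
first = redeem_token("abc", "user@example.com")   # True
second = redeem_token("abc", "user@example.com")  # False: already consumed
```

The order matters: removing the token even when the e-mail does not match prevents an attacker from probing a captured token against many addresses.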

Well, now, if the token matches and the e-mail is correct, we can look up the user in our password database and update the password hash to the new value. Then we flush the authentication tokens and session identifiers for the user, forcing a logout of all preexisting sessions. Simple.
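
The final step, sketched with dicts standing in for the password and session tables (the KDF parameters are illustrative):

```python
import hashlib
import os

password_db = {}   # email -> (salt, digest)
sessions = {"s1": "user@example.com", "s2": "other@example.com"}

def reset_password(email, new_password):
    # Store the new salted hash.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 200_000)
    password_db[email] = (salt, digest)
    # Force a logout everywhere: drop all sessions belonging to this user.
    for sid in [s for s, owner in sessions.items() if owner == email]:
        del sessions[sid]

reset_password("user@example.com", "new secret")
```

Flushing the sessions is the easily forgotten part: without it, an attacker who already held a live session keeps it even after the legitimate user “recovers” the account.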

Coverity reports on Open Source

Coverity runs a source code scanning project started by the U.S. Department of Homeland Security in 2006, a Net Security article reports. They recently published their report on quality defects, pointing out some interesting facts.

Coverity is mostly about code quality, but they also report security problems. Then again, any quality problem easily becomes a security problem under the right (or, rather, wrong) circumstances. So the report is interesting for its security implications.

Open Source is notably better at handling quality than the corporations. Apparently, a corporation can achieve the same level of quality as Open Source by going with Coverity tools. An interesting marketing twist but, although the subject of Open Source superiority has been beaten to death, this deals the issue another blow.

Another interesting finding is that corporations only get better at code quality once the size of a project goes beyond 1 million lines of code. This is not so surprising, and it is good to have some data backing up the idea that corporate coders are either not motivated enough or not professional enough to write good code without some formalization of code production, testing and sign-off.

That formalization is the necessary evil that hinders productivity at first but ensures an acceptable level of quality later.

Cryptography: just do not!

Software developers regularly attempt to create new encryption and hashing algorithms, usually to speed things up. There is only one answer one can give in this respect:

What part of "NO" don't you understand?

Here is a short summary of reasons why you should never meddle in cryptography.

  1. Cryptography is mathematics, very advanced mathematics
  2. There are only a few good cryptographers and cryptanalysts and even they get it wrong most of the time
  3. If you are not one of them, never, ever, ever try to write your own cryptographic routines
  4. Cryptography is a very delicate matter, worse than bomb defusing
  5. Consequently, you must assume that most of the usual “cryptographic” functions out there are not actually cryptographic
  6. Even when it is good, cryptography is too easy to abuse without knowing it
  7. Bad cryptography looks the same as good cryptography. You will not know whether cryptography is broken until it is too late

So, I hope you are sufficiently convinced not to create your own cryptographic algorithms and functions. But we still have to use cryptographic functions, and that is no picnic either. What can mere mortals do to keep themselves on the safe side?
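
One concrete instance of point 6, good cryptography abused without knowing it: comparing a secret tag (a MAC, a recovery token) with `==` returns at the first differing byte and so leaks timing information. The safe side for mere mortals is to reach for the vetted primitive the standard library already provides. A sketch with an illustrative tag value:

```python
import hmac

EXPECTED_TAG = b"deadbeefdeadbeef"   # stand-in for a stored MAC or token

def check_tag_leaky(tag):
    return tag == EXPECTED_TAG                     # short-circuits: timing leak

def check_tag_safe(tag):
    return hmac.compare_digest(tag, EXPECTED_TAG)  # constant-time comparison
```

Both functions return the same booleans; the difference is invisible in testing and only shows up as a remotely measurable timing side channel, which is exactly why this class of bug survives review.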


Car software security

I stumbled across an article on car software viruses. I did not see anything unexpected, really. The experts “hope” to get it all fixed before the word gets out and things start getting messy. Which tells us that things are in pretty bad shape right now. The funny thing, though, is that the academic group that did the research into vehicle software security was disbanded after working for two years and publishing a couple of damning papers, demonstrating that “the virus can simultaneously shut off the car’s lights, lock its doors, kill the engine and release or slam on the brakes.” An interesting side note is that the car’s system can be used to “remotely eavesdrop on conversations inside cars, a technique that could be of use to corporate and government spies.”

This stands in stark contrast to what car manufacturers are willing to disclose: “I won’t say it’s impossible to hack, but it’s pretty close,” said Toyota spokesman John Hanson. Basically, all you can hope for is that they are “working hard to develop specifications which will reduce that risk in the vehicle area.” I don’t know, mate, I think I had better stay with the good old trustworthy mechanical stuff. I guess I know too much about software security for my own good. I have a feeling they will inevitably be hacked. Scared? If there is a manual override for everything, not so much, but… The second-hand car market suddenly starts looking very appealing by comparison…