Brainwashing in security

When I first read the article titled Software Security Programs May Not Be Worth the Investment for Many Companies, I thought it was a joke or a prank. But then I had a feeling it was not. It was not the 1st of April, and it appears to be an actual record of events at the RSA Conference. Bloody hell, that guy, John Viega of SilverSky, “an authority on software security”, is speaking in earnest.

He is one of those people who keep making the state of security as miserable as it is today. His propaganda is simple: do not invest in security, it is a total waste of money.

“For most companies it’s going to be far cheaper and serve their customers a lot better if they don’t do anything [about security bugs] until something happens. You’re better off waiting for the market to pressure on you to do it.”

And following suit was the head of security at Adobe, Brad Arkin, can you believe it? No wonder we have to use things like NoScript to make sure their products do not execute in our browsers. I have a pretty good guess at what Adobe and SilverSky are doing: they are covering their asses. They do the minimum required to make sure they cannot easily be sued for negligence and for deliberately exposing their customers. But they do not care about you and me, they do not give a damn that their software is full of holes. And they do not deserve to be called anything that has the word ‘security’ in it.

The stuff Brad Arkin is pushing at you flies in the face of the very security best practices we swear by:

“If you’re fixing every little bug, you’re wasting the time you could’ve used to mitigate whole classes of bugs,” he said. “Manual code review is a waste of time. If you think you’re going to make your product better by having a lot of eyeballs look at a lot of code, that’s the worst use of human labor.”

No, sir, you are bullshitting us. Your company does not want to spend the money on security. Your company is greedy. Your company wants to take customers’ money for software full of bugs and holes. You know this, and you are deliberately telling lies to deceive not only the customers but even people who know a thing or two about security. But we have seen all this before, and no mistake.


The problem is, many small companies will believe anything pronounced at an event like the RSA Conference and take it for granted as the ultimate last word in security. And that will make the state of security even worse. We will have more of those soft-underbelly products and more companies practicing “security by ignorance”. And we do not want that.

The effect of security bugs can be devastating. The human brain is not capable of properly estimating risks of large magnitude but rare occurrence and tends to downplay them. That is why the risk of a large security problem that can bring a company to its knees is usually discarded. But security problems can be so severe that they will put even a large company out of business, not to mention that a small company would not survive even a slightly-above-average security incident.

So, thanks, but no, thanks. Security is a dangerous field; it is commonly compared to a battlefield, and there is some truth in that. Stop being greedy and make sure your software does not blow up.

Ignoring security is not a good idea…


I see that HTC finally got whacked over the head for the lack of security in their Android smartphones. I will have to contain myself here and leave aside the inherent issues surrounding Android, its security and its model of operation that will hurt … Ok, ok, I will stop now. So, HTC got dragged into court in the US for an improper implementation of software that allows remote attackers to steal various data from your smartphone. Big news. The problem is, they settled and are not likely to actually do anything about it. Anyway, that is not the interesting part.

The interesting thing is that the regulators complained that HTC did not provide security training to the staff and did not perform adequate security testing:

The regulator said in a statement that HTC America “failed to provide its engineering staff with adequate security training, failed to review or test the software on its mobile devices for potential security vulnerabilities (and) failed to follow well-known and commonly accepted secure coding practices.”

Most companies ignore security hoping that the problem never comes. This shortsighted view is so widespread that I feel like Captain Obvious repeatedly talking about it. But I suppose it bears repeating. Security risks are usually discarded because they are of low probability; their impact, however, is usually undervalued, so the resulting risk analysis is not what it should be. The security problems prevalent in software are usually of such magnitude that they can easily cost even a large business dearly.
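
To see how undervaluing impact skews the analysis, here is a back-of-the-envelope calculation of annualized loss expectancy; every figure below is made up purely for illustration:

```python
# Annualized loss expectancy: ALE = yearly probability x impact.
# All figures are hypothetical and only illustrate the shape of the argument.
breach_probability_per_year = 0.02   # "low probability": once in fifty years
breach_cost = 5_000_000              # cleanup, lawsuits, lost customers

ale = breach_probability_per_year * breach_cost
print(f"Expected yearly loss: ${ale:,.0f}")          # $100,000

yearly_security_budget = 60_000      # training, reviews, security testing
print("Security pays for itself:", yearly_security_budget < ale)   # True
```

Discard the impact term, or lowball it, and the same arithmetic happily concludes that doing nothing is cheaper.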

Ignoring security is not a good idea. It is like an elevator company ignoring the possibility of a person dying trapped in one of its elevators. An elevator company will do all it can to prevent even a remote chance of that happening, because if it ever does happen, the company can be out of business in no time. Software companies should take the same approach for granted, and the sooner, the better. A security problem can put a company out of business. Be forewarned.

The art of the unexpected

I am reading through the report of a Google vulnerability on The Reg and laughing. It is a wonderful demonstration of what attackers do to your systems – they apply the unexpected. The story is always the same. When we build a system, we try to foresee the ways in which it will be used. Then the attackers come and use it in a way that is still unexpected. So the whole security field can be seen as an attempt to expand the expected and narrow down the unexpected. Strangely, the unexpected keeps getting bigger anyway. And, conversely, the art of being an attacker is the art of the unexpected: shrugging off conventions and expectations and moving into the unknown.

Our task, then, is to do the impossible: to protect systems from the unexpected and, largely, the unexpectable. We do it in a variety of ways, but we have to remember that we are always on the losing side of this battle. There is simply no way to turn all of the unexpected into the expected. So this is the zen of security: protect the unprotectable, expect the unexpectable, and keep doing it in spite of the odds.

SAMATE Reference Dataset

The news alerts us to many interesting things, and one of the recent useful bits is the SAMATE Reference Dataset built by the NIST Software Assurance Metrics And Tool Evaluation project. Should you need test cases for common vulnerabilities, the database holds more than 80,000 of them by now.

From the project website:

The purpose of the SAMATE Reference Dataset (SRD) is to provide users, researchers, and software security assurance tool developers with a set of known security flaws. This will allow end users to evaluate tools and tool developers to test their methods. These test cases are designs, source code, binaries, etc., i.e. from all the phases of the software life cycle. The dataset includes “wild” (production), “synthetic” (written to test or generated), and “academic” (from students) test cases. This database will also contain real software application with known bugs and vulnerabilities. The dataset intends to encompass a wide variety of possible vulnerabilities, languages, platforms, and compilers. The dataset is anticipated to become a large-scale effort, gathering test cases from many contributors

Isn’t it good when you do not need to reinvent the wheel?

On the utility of technical security

It is often said that a system is only as strong as its weakest link. When you have good security and strong passwords, the weakest link is the human. As it has always been. Think of how a system can recover from a breach when the problem is not technical but human.

Video: http://youtu.be/W50L4UPfWsg

Security by …

We know several common buzzwords describing the security strategy of a company (or an individual). Let us try to define them once again, for completeness’ sake.

  • Security by ignorance
    Easily summed up as “what you do not know cannot hurt you”, and obviously wrong. It typically happens in the early stages of a software developer’s career, when a person is uneducated about security and simply does not know any better. Blissful ignorance usually ends rather abruptly with the cold shower of security education or a security breach.
  • Security by obscurity
    The typical position of most software companies: hiding secrets somewhere they themselves would not find them, oblivious to the fact that thieves know very well where you stash your money and jewelry. This is an actively harmful position, asking for trouble that usually does not take long to arrive. In companies, it is often the end result of the near-sightedness of management, worried only about their quarterly bonus.
  • Security by completeness
    The typical “very advanced security” position of many companies. This approach actually works quite well, but only thanks to the fact that there are more companies in the two categories above. Completeness means the company extends its quality assurance with security-relevant testing, design and code reviews, vulnerability testing and similar measures. In the end, one has to remember that correctness is not the same as security and cannot guarantee it. When implemented correctly, this approach provides a really potent false sense of security and serves as a shield against charges of incompetence and negligence.
  • Security by isolation
    An approach touted by many companies, security-related and otherwise, as the ultimate solution to today’s security problems. The idea is that you run your application in an isolated environment and throw the application away together with the environment afterwards, or whenever you think you have caught sight of a security problem. This way, breaches are contained within a small disposable portion of software and do not cross over to the system at large. There are a few problems here, not the least of which is the nurtured feeling of complacency and false security. Breaches can cross from the isolated environment into the system at large; the data is never completely thrown away (why else would you compute it in the first place?); and so on. This is a dead end of false security.
  • Security by design
    This is the concept least familiar to most people. Typically, this is the case where the system is specifically designed to be secure: the environment is not taken for granted, malicious abuse is assumed, and care is taken to minimize the impact of the inevitable security breaches. Since this takes a lot of careful planning, thinking ahead, design and verification, these products are always late to the market and nearly never succeed. So we have no idea what it is like to use secure systems. Mainframes (that is what “clouds” were called twenty years ago) were a bit like that, I feel…

So, what is left then? Is there a practical approach to security that would not be so expensive that companies balk at it, yet would still provide good, solid security?

Hack NFC Door Locks

I can see in the logs that people sometimes come to this site with interesting searches. A recent one was “Hack NFC Door Locks”. Well, since there is interest in the subject, why not? Let us talk about NFC, contactless smart card and RFID door locks, shall we?

Universal contactless smart card reader symbol

The actual technology used for a wireless door lock does not really matter all that much. Except, perhaps, when RFID is used, because it can only support the simplest read-only schemes. But all in due time. What really matters is how the wireless technology is used. As is usual in security, the same technology can be used to create a fairly secure system or a system that looks like Swiss cheese to an attacker. The devil is in the details.

The simplest way to use any wireless technology to open a lock is to receive some kind of identifier from the device, compare it against a stored database and take the security decision based on that. RFID tags, contactless smart cards and NFC devices all send out a unique identifier when they connect to the reader. In many systems, that identifier alone is used to take the security decision and, for example, open the door. The problem with this scheme is, of course, that there is no authentication at all. The identification phase is there, but the authentication phase is absent. This is quite similar to asking for a user name but not requiring a password.

Sounds silly, but many systems actually work like that. They rely on the assumption that copying a smart card is hard. Copying a smart card is indeed reasonably hard compared to typing in a password, although it can be done fairly easily too, but that is beside the point. An attacker does not need to copy the smart card. All that has to be done is to listen to the wireless communication with a good antenna from around the corner, record it, and play it back.
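
In pseudo-Python, the whole “security” of such a lock boils down to a set lookup; the UID values and the reader wiring below are, of course, hypothetical:

```python
# A minimal sketch of the flawed "identification only" door lock logic.
AUTHORIZED_UIDS = {"04A2B91C6F3080", "04778F22A15B80"}

def on_card_presented(uid: str) -> bool:
    """Open the door if the transmitted identifier is on the list."""
    return uid in AUTHORIZED_UIDS

# The flaw: the identifier travels over the air in the clear. An attacker
# who records one legitimate exchange simply transmits the same bytes again.
sniffed_uid = "04A2B91C6F3080"            # captured with a good antenna
assert on_card_presented(sniffed_uid)     # the door opens; nothing was proven
```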

Another common alternative is to store a unique identifier on the smart card (or NFC device) and then request the card to send that identifier back. This makes for a longer conversation between the reader and the card than in the case above, and, depending on the protocol, the attacker may need to do more work. However, since the communication is not encrypted, it can easily be reverse-engineered, and the attacker can still listen to the conversation, record all the required pieces of data and then talk to the reader using a computer and an antenna.

Contactless smart cards and NFC devices have the option of using authenticated and encrypted communication. That is what is typically used by transport fare and banking systems. Those are hard to break. I have spent countless hours analyzing and hardening such systems, and I know that, when implemented correctly, an attacker is about as likely to break one as any other well-implemented system using strong encryption – not very.
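
The core idea is challenge-response: the card proves it knows a shared secret without that secret, or any replayable value, ever crossing the air. Here is a minimal sketch; this is not the actual MIFARE or EMV protocol (those use their own, typically 3DES/AES-based schemes), just an HMAC-based illustration of why recording an exchange buys the attacker nothing:

```python
import hashlib
import hmac
import os

CARD_KEY = os.urandom(16)       # shared secret provisioned into the card

def reader_challenge() -> bytes:
    return os.urandom(16)       # a fresh nonce for every transaction

def card_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = reader_challenge()
response = card_response(CARD_KEY, challenge)
assert reader_verify(CARD_KEY, challenge, response)
# Replaying this response later fails: the next challenge will be different.
```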

Of course, there can still be mistakes in the protocol, leaks that allow an attacker to guess some properties and secret material, special hardware attacks… but it is easier to attack banking terminals with skimming devices than smart cards with hardware attacks. And door locks are more likely to implement the simplest of schemes than anything else…

The Elderwood Report

Symantec presents very interesting findings in their report on the so-called “Elderwood Project”. A highly interesting paper that I can recommend as bedside reading. Here is a teaser:

In 2009, Google was attacked by a group using the Hydraq (Aurora) Trojan horse. Symantec has monitored this group’s activities for the last three years as they have consistently targeted a number of industries. Interesting highlights in their method of operations include: the use of seemingly an unlimited number of zero-day exploits, attacks on supply chain manufacturers who service the target organization, and a shift to “watering hole” attacks (compromising certain websites likely to be visited by the target organization). The targeted industry sectors include, but are not restricted to; defense, various defense supply chain manufacturers, human rights and non-governmental organizations (NGOs), and IT service providers.

Random or not? That is the question!

Oftentimes, the first cryptography-related question you come across while designing a system is the question of random numbers. We need random numbers in many places when developing web applications: identifiers, tokens, passwords and so on all need to be somewhat unpredictable. The question is, how unpredictable should they be? In other words, what should be the quality of the random for those purposes?

It is very tempting (and it is usually done this way) to make a single random number generator that is “sufficiently good” for everything and use it all over the place. That is a flawed approach. Yes, given a true random number generator, it would be feasible. But true random number generators are rare and usually unavailable to a web application. So you end up with a pseudo-random number generator of unknown quality that you reuse for multiple purposes. That last part matters because, by observing the random numbers used in one part of the system, an attacker can deduce the random numbers used in another, totally hidden part of it. What should one do?
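
As a sketch of how badly this can go, assume the shared generator was seeded with something guessable, say the server start time (the seeding choice here is hypothetical, but depressingly common):

```python
import random
import time

seed = int(time.time())              # a guessable, and terrible, seeding choice
rng = random.Random(seed)            # Mersenne Twister: not cryptographic
session_token = rng.getrandbits(64)  # handed out to a user

# The attacker brute-forces plausible seeds and regenerates the token --
# and with it every other "random" value the generator will ever produce.
for guess in range(seed - 60, seed + 1):
    if random.Random(guess).getrandbits(64) == session_token:
        print("seed recovered:", guess)
        break
```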

Importantly, what is the value generated through a (pseudo-)random number generator going to be used for? Will the security of the whole system hang on this one value being totally unpredictable?

Generally, the system should not fall apart when a random value turns out to be not-so-random. The principles of layered security should prevent attacks even when the attackers succeed in guessing your random ids, tokens and so on.

Random number generator by xkcd

Case in point: Citibank relied on account numbers being “hidden”. They were not random to begin with, and there was no other security in place, so anyone who knew the account number structure could go and retrieve information on other accounts in the bank. That kind of thing should not happen. Using random values for the ids of the information in your database is good, but security must not rely on the ids alone. There must be controls that prevent someone from seeing what they should not, even when the ids are known.
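
Such a control is a simple ownership check on every lookup; a minimal sketch, where the store and names are hypothetical stand-ins for whatever framework you use:

```python
from dataclasses import dataclass

@dataclass
class Account:
    id: str
    owner_id: str

# Ids may or may not be random; either way, they are not the access control.
ACCOUNTS = {"acc_8f4e2c91": Account(id="acc_8f4e2c91", owner_id="alice")}

def get_account(account_id: str, current_user: str) -> Account:
    account = ACCOUNTS.get(account_id)
    # Knowing the id is not enough: ownership is checked on every access,
    # and the error is identical whether the account is missing or foreign.
    if account is None or account.owner_id != current_user:
        raise PermissionError("no such account")
    return account

get_account("acc_8f4e2c91", "alice")        # works
# get_account("acc_8f4e2c91", "mallory")    # PermissionError, even with the id
```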

Another case, on the opposite side, is the credit card “chip and pin” system created by EMV. They decided for some reason that it would be okay to have a random number that is not, well, so random. The manufacturers, of course, took advantage and did not provide real random number generators. The criminal organizations, of course, took notice and promptly implemented card cloning targeting the weak ATM terminals. All of this because of a weak random number. It should not have happened, because they were supposed to know that their security depends on that random number, and they should have taken care. This is the case where you really do need a proper random number generator, and I am surprised MasterCard CAST actually let it through. But enough of the scary stories.

Generally, most of the time when you think “I need random numbers”, you do not. What you need are reasonably unpredictable numbers of fairly low cryptographic quality. And that is because your system design must not depend on the quality of those random numbers. If it does, that is a bad design decision, and whenever you encounter such a dependency you should consider a redesign. There are very few exceptions, like key generation and similar things, which you should not be doing in application code anyway. For the rest, the system must remain fully operational and properly deny access even if all your random numbers suddenly come out equal to five. The system may fail to operate, but it must not fall wide open because the random number generator failed.

There are situations where you could say that the security of the whole system relies on the random properties of one value. For example, the password reset systems of most websites send a random token by e-mail to the account holder. When this token is entered into the password reset page, the system allows changing the password. If this token can be guessed (or forced to a particular value), the system’s security is easily compromised – just request a password reset and go enter the value, no need to ever see the e-mail. In this case, yes, that value must be truly random, or at the very least impossible to predict within a reasonable period of time (your tokens do limit that “reasonable time” with an assigned validity period, right? Right??). Interestingly, the delivery channel does not matter here at all. Even with a so-called “two-factor” authentication system that sends the code in a short message to your mobile, it will not help: if an attacker can guess the token, the rest of the system is of no consequence in this design.
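
A sketch of such a token done carefully, assuming Python’s `secrets` module for the strong randomness; storage and delivery are out of scope here:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60          # the window that bounds "reasonable time"

def issue_reset_token() -> dict:
    return {
        "value": secrets.token_urlsafe(32),           # ~256 bits, unguessable
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }

def verify_reset_token(stored: dict, presented: str) -> bool:
    if time.time() > stored["expires_at"]:
        return False                                  # expired tokens never match
    return secrets.compare_digest(stored["value"], presented)
```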

So, a typical system should have at least two random number generators: one for internal purposes and one for generating the tokens sent to users. Both should be good, but the one for tokens must be cryptographically strong, while the one for internal use may be merely fairly unpredictable, because your security will not rely solely on those numbers. The generators should be written by people with some knowledge of cryptography, publicly reviewed and tested.
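
In a language like Python, that split might look like this; the function names are mine, and the point is the separation, not these particular primitives:

```python
import random
import secrets

def new_user_token() -> str:
    # User-facing tokens come from the OS CSPRNG via the `secrets` module.
    return secrets.token_urlsafe(32)

# Internal, non-security ids come from a separate, ordinary PRNG whose
# outputs never leave the system, so the tokens users see reveal nothing
# about it (and vice versa). It is seeded unpredictably rather than with time.
_internal_rng = random.Random(secrets.randbits(64))

def new_internal_id() -> int:
    return _internal_rng.getrandbits(48)
```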

And here is some random reading for more on the subject:

  1. Randomness attacks against PHP applications.
  2. Chip and pin ‘weakness’ exposed by Cambridge researchers.
  3. Citigroup hack exploited easy-to-detect web flaw.
  4. How to test a random number generator.
  5. NIST on random number generators.

Car software security

I stumbled across an article on car software viruses. I did not see anything unexpected, really. The experts “hope” to get it all fixed before the word gets out and things start getting messy. Which tells us that things are in pretty bad shape right now. The funny thing, though, is that the academic group that did the research into vehicle software security was disbanded after working for two years and publishing a couple of damning papers demonstrating that “the virus can simultaneously shut off the car’s lights, lock its doors, kill the engine and release or slam on the brakes.” An interesting side note is that the car’s system can be used to “remotely eavesdrop on conversations inside cars, a technique that could be of use to corporate and government spies.”

This goes in stark contrast to what car manufacturers are willing to disclose: “I won’t say it’s impossible to hack, but it’s pretty close,” said Toyota spokesman John Hanson. Basically, all you can hope for is that they are “working hard to develop specifications which will reduce that risk in the vehicle area.” I don’t know, mate, I think I had better stay with the good old trustworthy mechanical stuff. I guess I know too much about software security for my own good. I feel they will inevitably be hacked. Scared? If there is a manual override for everything – not so much, but… The second-hand car market suddenly starts looking very appealing by comparison…
