Coverity reports on Open Source

Coverity is running a source code scanning project started by the U.S. Department of Homeland Security in 2006, a Net Security article reports. They recently published their report on quality defects, pointing out some interesting facts.

Coverity focuses mostly on code quality, but they also report security problems. Then again, any quality defect easily becomes a security problem under the right (or, rather, wrong) circumstances, so the report is interesting for its security implications.

Open Source is notably better at handling quality than corporations. Apparently, a corporation can achieve the same level of quality as Open Source by going with Coverity tools, which is an interesting marketing twist. The subject of Open Source superiority has been beaten to death already, but this report deals the issue another blow.

Another interesting finding is that corporations only get better at code quality once the size of the project goes beyond one million lines of code. This is not so surprising, and it is good to have some data backing up the idea that corporate coders are either not motivated or not professional enough to write good code without some formalization of code production, testing and sign-off.

That formalization is a necessary evil: it hinders productivity at first but ensures an acceptable level of quality later.

Cloud security

Let’s talk a little about a subject that is very popular nowadays: the so-called ‘cloud security’. Let’s determine what it actually is, what we are really talking about, and see what may be special about it.

‘Cloud’ – what is it? Basically, the mainframes have been doing ‘cloud’ all along, for decades now. Cloud is simply a remotely accessible computer with shared resources. Typically, what most people ‘get from the cloud’ is either a file server with a certain amount of redundancy and fault tolerance, a web service with some database resources attached, or a virtual machine (VM) to do with as they please. Yes, it is all that simple. Even the most hyped-up services, like Amazon, boil down to these things.

Once the basics are sorted out, you are talking about three distinctly different types of operation: a file server, a web/database application and a VM. The three have different security models and can be attacked in completely different ways, so it does not really make sense to speak about ‘cloud security’ as such. It only makes sense to speak about the security of a particular service type. And all three of them have been studied in depth and have their defenses worked out in detail.

Mind you, there is also another kind of ‘cloud security’: the security of the data center itself, where the physical computers, accessible through the physical network, actually live. The security of data centers is an important subject of interest to the operators of those data centers. At the same time, consumers of the services are rarely concerned with that kind of security, assuming (sometimes without good cause) that the data center operator has taken good care of it.

So, from the point of view of a developer or a consumer of services, it makes sense to talk about three types of security, in three different security models, that are fairly well understood and have been analyzed many times over the decades. Not using that experience, accumulated first by the mainframe developers and then by the open systems developers, is at the very least irresponsible.

Getting revenue on security?

I am now looking into arguably the hardest problem in security: how to make it pay off. Security is usually seen as a risk management tool, where increasing the security investment lowers the risk of costly disasters. But the trade-off between security investment and risk is hard to evaluate, and there is a bias toward ignoring rare risks.
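To make that bias concrete, here is a minimal sketch, with numbers that are entirely made up for illustration, of the classic annualized-loss style of reasoning: the rare disaster can dominate the expected cost even though it is exactly the risk that gets ignored.

```java
// Illustrative only: the figures below are invented to show the shape of the
// trade-off, not taken from any real incident data.
public class SecurityRiskSketch {

    // Annualized loss expectancy: expected yearly loss from one risk scenario.
    static double ale(double annualRateOfOccurrence, double singleLossExpectancy) {
        return annualRateOfOccurrence * singleLossExpectancy;
    }

    public static void main(String[] args) {
        // Frequent, small nuisance: say, minor malware cleanups.
        double nuisance = ale(12.0, 2_000.0);      // 12 times a year, $2,000 each

        // Rare, large disaster: say, a major breach once in 20 years.
        double disaster = ale(0.05, 5_000_000.0);  // 1-in-20-years, $5,000,000

        System.out.printf("Nuisance ALE: $%,.0f%n", nuisance);  // $24,000
        System.out.printf("Disaster ALE: $%,.0f%n", disaster);  // $250,000
        // The rare event dominates the expected loss, yet it is precisely the
        // kind of risk that tends to be waved away in practice.
    }
}
```

Of course, in real life both the frequency and the damage estimates are the hard part, which is exactly why the trade-off is so difficult to evaluate.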

We keep talking about costs, if you noticed. We lower costs, and not even actual costs but potential ones, and we do not increase any revenue here.

For example, when we talk about a product, we can look at improvements that would get us more of the following to improve the bottom line:

  1. Acquisition – getting more users or clients
  2. Activation – getting users or clients to make a purchase
  3. Activity – getting users or clients to come back for more

Can security demonstrate similar improvements? To move from cost cutting to revenue generation? Share your opinion, please!

Exodus from Java

Finally, the news I was subconsciously waiting for: the exodus of companies from Java has started. It does not come as a surprise at all. Java never fulfilled the promises it made at the beginning; it did not deliver the promised portability, security or ease of programming. I am only surprised it took so long, although, knowing full well that company managers routinely optimize for their own careers and bonuses, even that does not come as a shock.

For those not in the know, the gist of the problem is that Java has always promised to provide some sort of “inherent security”. That is, no matter how badly you write the code, it will still be more secure than code written in C or other high-level languages. Java claimed the absence of buffer overflows, null pointer dereferences and similar problems, which did not turn out to be true after all. And that promise had a very important consequence.

The consequence is that anyone writing in Java, or learning it, is subconsciously aware of that promise. People tend to relax and allow themselves to be sloppy, so code written in C ends up being tighter, more organized and more secure than code written in Java. C developers also tend to be, on average, better educated in the intricacies of software development and more aware of potential pitfalls. And that makes a huge difference.
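As a minimal illustration (the code and names below are contrived, not taken from any real project): the memory-corruption aspect of these bugs does go away in Java, but the underlying mistake of not handling a missing value does not; it simply resurfaces as a runtime exception that sloppy code never anticipates.

```java
import java.util.HashMap;
import java.util.Map;

// Contrived sketch: Java prevents the memory corruption, but the coding mistake
// itself (forgetting to handle a missing value) is still there and still blows
// up at runtime, as a NullPointerException instead of a segfault.
public class NullDereferenceSketch {

    private static final Map<String, String> SESSIONS = new HashMap<>();

    static int sessionNameLength(String sessionId) {
        // get() returns null for an unknown id; calling length() on it throws.
        String name = SESSIONS.get(sessionId);
        return name.length();  // NullPointerException for any id an attacker invents
    }

    public static void main(String[] args) {
        SESSIONS.put("abc123", "alice");
        System.out.println(sessionNameLength("abc123"));      // prints 5
        System.out.println(sessionNameLength("no-such-id"));  // throws NullPointerException
    }
}
```

An attacker who controls the input can still turn that into a crash or a denial of service; the language did not remove the need for careful coding.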

So, the punch line is, if you want something done well, forget Java.

Apple – is it any different?

The article “Password denied: when will Apple get serious about security?” in The Verge talks about Apple’s insecurity, blaming Apple’s badly organized security and the absence of any visible security strategy and effort. Moreover, it seems Apple is simply not taking security seriously enough.

“The reality is that the Apple way values usability over all else, including security,” Echoworx’s Robby Gulri told Ars.

It is good that Apple gets a bit of bashing, but are they really all that different? If you look around and read about other companies, you quickly realize that it is not just Apple; it is a common, all too common, problem: most companies do not take security seriously. And they have a good reason: security investment cannot be justified in the short term, it cannot be sold, and it cannot be turned into bonuses and raises for the management. And the risks are typically ignored, as I discussed previously.

So in this respect Apple simply follows in the footsteps of all the other software companies out there. They invest in features, in customer experience, in brand management, but they ignore security. Even the recent scandal with Mat Honan’s digital life being wiped out, which got a lot of publicity, did not change much, if anything. The company did not suffer enough damage to start thinking about security seriously. The damage was done to a private individual and did not translate into any impact on sales. So it demonstrated once again that security problems do not hurt the bottom line. And without that, why would a company care?

We need the damage done to external parties to be internalized and absorbed by the companies; as long as it stays external, they will not care. It is exactly the same as with the environment: ecological costs are external to the company, so it will not care unless regulation makes those costs internal. We need a mechanism for internalizing the costs of security failures.

Security training – does it help?

I came across the suggestion to train (nearly) everyone in the organization in security subjects. The idea is very good: we often have the problem that management has absolutely no knowledge of or interest in security and therefore ignores the subject despite the efforts of the company’s security experts. Developers, quality assurance, documentation, product management: they all need to be aware of how serious software security is for the company and recognize that sometimes security must be given priority.

But will it help? I have spent a lot of time educating developers and managers on security, and my experience is that it does not help most people. Some people get interested and involved; those are the ones naturally inclined to take good care of all aspects of their products, including but not limited to security. Most people do not really care. They are not interested in security; they are there to do a job and get paid. And nobody gets paid for more security.

That repeatedly results in situations like the one described in the article:

“If internal security teams seem overly draconian in an organization, the problem may not be with the security team, but rather a lack of security and risk awareness throughout the organization.”

Unfortunately, merely informative security training is not going to change that. People tend to ignore rare risks, and that is exactly what happens to security in product development. What we need is not a security awareness course but a way to “hook” people on security, a way to make them understand, deep inside, that security is important not in some abstract way but to them personally. Then security will work. How do we do that?

Brainwashing in security

At first, when I read the article titled Software Security Programs May Not Be Worth the Investment for Many Companies, I thought it was a joke or a prank. But then I had a feeling it was not. It was not the 1st of April, and the article appears to be a genuine record of events at the RSA Conference. Bloody hell, that guy, John Viega of SilverSky, “an authority on software security”, is speaking in earnest.

He is one of those people who keep making the state of security as miserable as it is today. His propaganda is simple: do not invest in security, it is a total waste of money.

“For most companies it’s going to be far cheaper and serve their customers a lot better if they don’t do anything [about security bugs] until something happens. You’re better off waiting for the market to put pressure on you to do it.”

And following suit was the head of security at Adobe, Brad Arkin, can you believe it? I am no longer surprised we have to use things like NoScript to make sure their products do not execute in our browsers. I have a pretty good guess at what Adobe and SilverSky are doing: they are covering their asses. They do the minimum required to make sure they cannot easily be sued for negligence and for deliberately exposing their customers. But they do not care about you and me; they do not give a damn if their software is full of holes. And they do not deserve to be called anything that has the word ‘security’ in it.

The stuff Brad Arkin is pushing at you flies in the face of the very security best practices we swear by:

“If you’re fixing every little bug, you’re wasting the time you could’ve used to mitigate whole classes of bugs,” he said. “Manual code review is a waste of time. If you think you’re going to make your product better by having a lot of eyeballs look at a lot of code, that’s the worst use of human labor.”

No, sir, you are bullshitting us. Your company does not want to spend money on security. Your company is greedy. Your company wants to take money from customers for software full of bugs and holes. You know this, and you are deliberately telling lies to deceive not only the customers but even people who know a thing or two about security. But we have seen all this before, and no mistake.

The problem is that many small companies will believe anything pronounced at an event like the RSA Conference and take it for granted as the ultimate last word in security. And that will make the state of security even worse. We will have more of those soft-underbelly products and more companies practicing “security by ignorance”. And we do not want that.

The effect of security bugs can be devastating. The normal human brain is not capable of properly estimating risks of large magnitude but rare occurrence and tends to downplay them. That is why the risk of a large security problem that can bring a company to its knees is usually discarded. But security problems can be severe enough to put even a large company out of business, and a small company would not survive even a slightly-worse-than-average security incident at all.

So, thanks, but no thanks. Security is a dangerous field; it is commonly compared to a battlefield, and there is some truth in that. Stop being greedy and make sure your software does not blow up.

Ignoring security is not a good idea…

I see that HTC finally got whacked over the head for the lack of security in their Android smartphones. I will have to contain myself here and leave aside the inherent issues surrounding Android, its security and its model of operation that will hurt … OK, OK, I will stop now. So, HTC got dragged into court in the US for improper implementation of software that allows remote attackers to steal various data from your smartphone. Big news. The problem is that they settled and are not likely to actually do anything about it. Anyway, that is not the interesting part.

The interesting thing is that the regulator complained that HTC did not provide security training to its staff and did not perform adequate security testing:

The regulator said in a statement that HTC America “failed to provide its engineering staff with adequate security training, failed to review or test the software on its mobile devices for potential security vulnerabilities (and) failed to follow well-known and commonly accepted secure coding practices.”

Most companies ignore security hoping that problems never come. This shortsighted view is so widespread that I feel like Captain Obvious repeatedly talking about it, but I suppose it bears repeating. Security risks are usually discarded because they are of low probability. However, their impact is usually undervalued, so the resulting risk analysis is not quite what it should be. The security problems prevalent in software are usually of such magnitude that they can easily cost even a large business dearly.

Ignoring security is not a good idea. It is like an elevator company ignoring the possibility of someone dying trapped in one of its elevators. An elevator company will do all it can to prevent even a remote chance of that happening, because if it does happen, the company can easily be out of business in no time. Software companies should take the same approach for granted, and the sooner, the better. A security problem can put a company out of business. Be forewarned.

The art of the unexpected

I am reading through the report of a Google vulnerability on The Reg and laughing. It is a wonderful demonstration of what attackers do to your systems: they apply the unexpected. The story is always the same. When we build a system, we try to foresee the ways in which it will be used. Then the attackers come and use it in a way that was still unexpected. So the whole security field can be seen as an attempt to expand the expected and narrow down the unexpected; strangely, the unexpected always keeps getting bigger. Conversely, the art of being an attacker is the art of the unexpected: shrugging off conventions and expectations and moving into the unknown.

Our task, then, is to do the impossible: to protect systems from the unexpected and, largely, the unexpectable. We do it in a variety of ways, but we have to remember that we are always on the losing side of the battle; there is simply no way to turn all of the unexpected into the expected. So this is the zen of security: protect the unprotectable, expect the unexpectable, and keep doing it in spite of the odds.

SAMATE Reference Dataset

The news can alert us to many interesting things, and one of the recent useful bits is the SAMATE Reference Dataset built by the NIST Software Assurance Metrics And Tool Evaluation (SAMATE) project. Should you need test cases for common vulnerabilities, the database holds more than 80,000 of them by now.

From the project website:

The purpose of the SAMATE Reference Dataset (SRD) is to provide users, researchers, and software security assurance tool developers with a set of known security flaws. This will allow end users to evaluate tools and tool developers to test their methods. These test cases are designs, source code, binaries, etc., i.e. from all the phases of the software life cycle. The dataset includes “wild” (production), “synthetic” (written to test or generated), and “academic” (from students) test cases. This database will also contain real software application with known bugs and vulnerabilities. The dataset intends to encompass a wide variety of possible vulnerabilities, languages, platforms, and compilers. The dataset is anticipated to become a large-scale effort, gathering test cases from many contributors
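To give a flavor of what such test cases look like, here is a toy flawed/fixed pair of my own, written in the general style of vulnerability test cases (it is not an actual SRD entry), covering the classic path traversal weakness (CWE-22):

```java
import java.io.File;
import java.io.IOException;

// Toy flawed/fixed pair in the general style of vulnerability test cases;
// an illustration only, not an entry from the SRD itself.
public class PathTraversalSketch {

    private static final File BASE_DIR = new File("/var/app/public");

    // Flawed: the user-supplied name is used directly, so a value like
    // "../../etc/passwd" escapes the intended directory (CWE-22).
    static File resolveBad(String userSuppliedName) {
        return new File(BASE_DIR, userSuppliedName);
    }

    // Fixed: resolve to a canonical path and reject anything that ends up
    // outside the base directory.
    static File resolveGood(String userSuppliedName) throws IOException {
        File candidate = new File(BASE_DIR, userSuppliedName).getCanonicalFile();
        String base = BASE_DIR.getCanonicalPath() + File.separator;
        if (!candidate.getPath().startsWith(base)) {
            throw new IOException("path escapes the public directory: " + userSuppliedName);
        }
        return candidate;
    }
}
```

A static analysis tool run against such a pair should flag the flawed method and stay quiet about the fixed one, which is exactly the kind of measurement a reference dataset makes possible.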

Isn’t it good when you do not need to reinvent the wheel?
