Worst languages for software security

I was sent an article about the programming languages that generate the most security bugs in software today. The article seemed to refer to a report by Veracode, a company I know well, to discuss what software security problems are out there in applications written in different languages. That is an excellent question and a very interesting subject for discussion. Except that the article really failed to discuss anything, making instead misleading and incoherent statements about both old-school languages like C/C++ and scripting languages like PHP. I fear we will have to look into this problem ourselves instead.

So, what languages are the worst when it comes to software security? Are they the old C and C++, as so many proponents of Java would like us to believe? Or are they the new and quickly developing languages with little enforcement of structure, like PHP? Let’s go to Veracode and get their report: “State of Software Security. Focus on Application Development. Supplement to Volume 6.”

The report includes a very informative diagram showing the percentage of applications that pass the OWASP policy for a secure application out of the box, grouped by the language of the application. The OWASP policy is defined as “not containing any of the security problems mentioned in the OWASP Top 10 most important vulnerabilities for web applications”, and OWASP is the accepted industry authority on web application security. So if they say that something is a serious vulnerability, you can be sure it is. Let’s look at the diagram:

[Figure: Veracode – percentage of applications passing the OWASP policy, by language]

Fully 60% of applications written in C/C++ come out free of the most severe software security vulnerabilities listed by OWASP. That is a very good result and a notable achievement. Next, with pass rates one and a half to two times lower, come the three mobile platforms. The next actual programming language, .NET, comes out more than two times worse, Java two and a half times worse than C/C++, and the scripting languages three times worse.

Think about it. An application written in Java is two and a half times less likely to be free of these security vulnerabilities than one written in C/C++. And C/C++ is the only language that gives you a better than 50% chance of not having serious security vulnerabilities in your application.

Why is that?

The reasons are many. For one thing, Java has never delivered on its promises of security, stability and uniformity. People must struggle with issues that have long been resolved in other languages, like the idiotic memory management and garbage collection, reinventing the wheel on any more or less non-trivial piece of software. The language claims to be “easy” and “fool-proof” while letting people unknowingly compare string object references instead of string contents with the equality operator. The discrepancy between the fantasy and the reality is huge in the Java world and getting worse all the time.
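
A minimal Java sketch of that string-comparison pitfall (the class name and values are invented for illustration):

```java
public class StringComparison {
    public static void main(String[] args) {
        String literal = "admin";
        String fromInput = new String("admin"); // e.g. a value assembled at runtime

        // Compares object references, not contents: prints false,
        // silently turning a check like this into a bug.
        System.out.println(literal == fromInput);

        // Compares contents: prints true. This is almost always what was meant.
        System.out.println(literal.equals(fromInput));
    }
}
```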

Still, the main reason, I think, is the quality of the developers: both their level of knowledge and expertise, as it were, and the sheer carelessness of many Java programmers. Where C/C++ developers are usually masters of software development, Java developers are most of the time just coders. That makes a difference. People learn Java in all sorts of courses or by themselves – companies constantly hire Java developers, so it makes sense to follow the market demand. Except that those people are kids with an ad-hoc knowledge of a programming language and absolutely no concept of software engineering. By contrast, most C/C++ people are actually engineers who know much better what they are doing, even when they write things in a different language. But the “coders” are much cheaper than real engineers, so the companies developing in Java end up with lots of the former and the software quality goes down the drain.

The difference in software quality is readily apparent when you compare the diagrams of detected issue types, taken from the same report:

[Figure: Veracode – distribution of detected issue types, by language]

You can see that code quality problems make up only 27% of the total number of issues detected in the case of C/C++, while for Java they represent a whopping 80% of the total.

Think again. Code written in Java has several times worse quality than code written in C/C++.

It is not surprising that the quality problems result in security vulnerabilities. Quality and security go hand in hand and require discipline and knowledge on the part of the developer. Where one suffers, the other inevitably does as well.

The conclusion: if you want secure software, you want C/C++. You definitely do not want Java. And even if you are stuck with Java, you still want C/C++ developers to write your Java code, because they are more likely to write better and more secure software.

The human factor: philosophy and engineering

The ancient Greeks had a concept of “aretê” (/ˈærətiː/) that is usually loosely translated into English as “quality”, “excellence”, or “virtue”. It was all that and more: the term meant the ultimate and harmonious fulfillment of a task, purpose, function, or even of one’s whole life. Living up to this concept was the highest achievement one could attain. Unfortunately, it does not translate well into English, where the necessary concept is absent.

To give an example of arete, consider a work of art, like a painting or a book. We could argue endlessly about a work of art and its many aspects, but the majority of people have no problem identifying whether a work of art is actually a masterpiece or a stupid piece of garbage. That “whatsit” we identify in a masterpiece is the arete, the harmony of total and complete excellence of the author pouring his virtue into his work.

[Image: Raphael – Plato and Aristotle]

Unfortunately, the science of today’s world is not built on the same principle and, in fact, is drifting further and further away from it. It all probably started with Aristotle and his move away from the “essence of things” as the source of everything else. Aristotle taught us, essentially, that we can understand things by splitting and analyzing them, that the “essence” of a thing can be understood from the thing itself, that we can achieve anything through the “divide and conquer” principle. That is where our scientific methods originate. Aristotle, among other philosophers, gave us logic and the foundation for all other sciences.

The scientific method of divide and conquer is great – it is, effectively, what built this civilization – but it has a downside. It is fine when there is a counterbalance to it, but when it is taken as the only possible view of the world, as the ultimate philosophy, it gets taken to extremes and starts causing problems. It may seem far-fetched to draw a connection from ancient philosophy to contemporary engineering and security, but our engineering is based on how we think, so our products necessarily reflect our (apparent or obscured) philosophy.

What is the problem with the philosophical direction that started with Aristotle and that we have all effectively followed ever since? The problem is that it leaves no place for, and does not require, arete, the harmonious excellence. The philosophy of today makes our thinking, and not only our scientific thinking, compartmentalized. We are now used to thinking about things as completely separate from each other. We are used to dividing the world up into small chunks and operating on those small chunks one at a time, arguing that this makes the whole thing more manageable. We treat the world as a puzzle, investigating and tweaking one piece at a time. But we forget about the relation of the chunk we are working on to the grand scheme of things. We forget about the influences and dependencies, both in space and time. We forget that the puzzle must click together in the end.

For example, when quality management is explained to you, you usually get an overview of the famous “5 views of quality”. The “transcendental view” is formulated as the inherent quality obvious to the observer – “I know it when I see it”; the “product-based view” provides for designing a product against benchmarks for speed, mean failure rate and so on; the “user-based view” calls for satisfying consumer preferences; the “manufacturing-based view” requires conformance to specifications; and the “value-based view” calls for design based on cost-benefit analysis. Out of all these, the only thing the customer really sees and really cares about is the first one – the arete, the “transcendental quality”. Guess which one is completely ignored in quality management? The very same one, for the simple reason that it is not easily broken up into pieces that can be measured and “improved” on their own. And the same problem permeates all of our engineering, and especially security.

Systemic approach

That means we tend to ignore one of the cornerstones of security: the systemic approach. We often come across the myth, declared even from security conference stages, that secure components make a secure system. This is the assumption that drives many a creation of systems that are claimed, for this reason, to be secure. Well, what a surprise: it does not, and they are not. This problem is well known in the security field, especially where seriously high levels of security are involved, like the smart card business. When you take two secure components and combine them, you cannot make any statement about the security of the whole based just on the security of each part. You must consider the whole thing before you can make any statements regarding the system.

Secure components are never secure unconditionally. They are secure conditionally: they are secure as long as a certain set of assumptions holds true. Once an assumption is invalidated, the component is no longer secure. When we combine two secure components we create a problem of composition, where the components potentially interact in unforeseen ways. They may still be secure, or they may not be. This is the case where a systemic, holistic view of the system must definitely take the upper hand.
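
A hypothetical sketch in Java (the names and the scenario are invented) of how two components, each “secure” under its own assumption, can compose into something insecure:

```java
public class ComposedComponents {

    // Component A: escapes the characters that are dangerous in HTML *element content*.
    // Its security assumption: the result is only ever placed between tags, e.g. <p>HERE</p>.
    static String htmlEscape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    // Component B: builds markup and, violating A's assumption, drops the value
    // into an unquoted attribute, where no '<' or '>' is needed to inject anything.
    static String renderProfileLink(String userName) {
        return "<a href=/profile?user=" + htmlEscape(userName) + ">profile</a>";
    }

    public static void main(String[] args) {
        // Each component looks fine in isolation; the combination is injectable:
        // a single space is enough to start a new, attacker-controlled attribute.
        System.out.println(renderProfileLink("bob onmouseover=alert(1)"));
        // -> <a href=/profile?user=bob onmouseover=alert(1)>profile</a>
    }
}
```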

Short time horizons

Another problem is the extreme shortening of time horizons. Have you noticed how everyone is interested only in immediate results, immediate profits, things we can deliver, sell, buy, have, wear, eat and drink today? It is noticeable everywhere in society, but in the software industry it has become the defining aspect of life.

When whatever we are building exists in isolation, when we need not consider the effects on the industry, the technology, the society… we do not need to worry about the long-term results of our work. We did this “thing”, we got our bonus, and that is the end of the story. But is it?

I am sure we have all come across problems that arise from someone not doing a proper job because they did not think it was worth the trouble; it was all done and gone anyway. Yes, for whoever did it, it was done and gone, but for us, the people coming after, don’t we wish that someone had been more careful, more precise, more thoughtful? Don’t we wish he had spent just a little more time making sure everything works not just somehow, but properly?

I have a friend who works in a restaurant chain in St. Petersburg. That is a pretty large chain and there are many things to do, of course. One thing we talked about once was the food safety and health inspection. I was surprised, frankly, at how much effort goes into compliance with those rules. They actually do follow all of the food safety guidance and perform thorough audits and certification. When I asked her why they bother with this, my friend told me that they have two very serious reasons to do so, and both of them are long-term, overall business risk problems. One, if someone should get food poisoning, they would otherwise have no certifications and audit results to fall back on, and they would have a hard time in court proving that they actually exercised due diligence in all matters. Two, they would lose a lot of clientele if something like this ever happened, and in an established industry with a lot of competition that could well mean going out of business.

So, you could call that risk management, due diligence, or simply a good understanding that business is not just about getting products out of the door as cheaply as possible; an understanding that there is more to running a good business than momentary advantages. My friend has a holistic view of the business that encompasses everything important to it, and that makes her and her business successful.

They could, like so many companies in our software field, take a short-term view and save some money, get something quick and dirty done, but they understand that this is not a sound business strategy in the long term. In our field, security is getting worse and worse, and somehow we still think it is okay to think entirely in the short term, to the next release, to the next milestone. What we need is a proper long-term consideration of all aspects of the products we develop and deliver for things to start changing for the better. A holistic approach to software development may slow things down, but it will reduce the risk of future collapses for all of us.

Security prevents innovation

Another aspect of the same “faster and fancier now!” game that we encounter regularly is the “Security should not prevent innovation!” slogan. Says who? Not that I am against innovation, but security must sometimes prevent certain innovation, like the tweaking of cryptographic algorithms for performance in ways that would break their security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.

“Innovation” has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We have taken it too far; we pray to innovation in places where it does not matter or is even harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development, resulting not only in low security but in low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering structures now, without any regard for proper planning and execution? Well, examples are easy to find, and the results are disastrous.


What makes the big difference? We notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or will we wait for things to collapse first?

Uncertainty principle

An interesting concept surfaced not so long ago as an excuse for not doing anything: the so-called “uncertainty principle of new technology”…

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, and even call the “Collingridge Dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? Now, this statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture, and it will be very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances, namely, when scientists and engineers develop a new technology looking only at its internal structure, without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one can observe that the parts can be exchanged, modified and combined in numerous ways, often leading to something that has the potential to work. In this way, new technologies and things can be invented indefinitely. Are they useful to society, the world and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is guided and shaped by the underlying quality of life. Society influences what gets invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes scientists should stand up for the fundamental principle of quality over the quantity of inventions and fight for the technology that would, in turn, steer society towards a better and more harmonious life.

Should technology be developed with the utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and this academic dilemma would not exist. Everything that is built from quality first remains so and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by current academia and industry. We can re-base society to put quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, the rationality that now guides everything even to the detriment of life itself, and concentrate on quality instead. Call it the “Zenkoff Principle”.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush of delivery, but we would do better to rediscover them than to patch things over with pointless and harmful excuses.


NASA Apollo Mission

Think of the Apollo mission that brought people to the Moon. Would you not say that was a great achievement, not only for the engineers but for the whole world? The Apollo mission was a project that encompassed many different areas, from metallurgy to psychology, to make space travel possible.

The Apollo ships also had software. The software was complex and had lots of parts. A spaceship contains a lot of sensors, equipment and machinery controlled by software: command and data handling, telecommunications, electrical power system control, propulsion control, guidance and navigation, spacecraft integrity control, thermal control and so on. The spaceship is an incredibly complex system that operates under a wide variety of harsh and extreme conditions: vibration stress and acceleration, radiation and cosmic rays, meteoroids and extreme temperatures. And do not forget that the system must also be fool-proof. As one of the people working on Apollo put it, “there is always some fool that switches the contacts polarity.”

And this complex system that had to operate under tremendous stress actually worked. Apollo not only went to the Moon but returned safely back to Earth. Is this not an example of great engineering? Is this not an example of a great achievement of humankind?

The software for the mission was developed by the engineers at MIT, under NASA’s project management and with software development process experts from IBM. However, the success of the software development for the Apollo mission cannot be attributed to the software process guidance from IBM or to the project management of NASA. Both failed miserably. They tried dividing the system up into components and developing the software to the best standards… and it did not work. MIT was lucky, in a sense, that the launch was delayed due to hardware problems; otherwise NASA would have had to cancel it because of the software problems.

it gets difficult to assign out little task groups to program part of the computer; you have to do it with a very technical team that understands all the interactions on all these things.
— D. G. Hoag interview, MIT, Cambridge, MA, by Ivan Ertel, April 29, 1966

The software was successfully developed only because the MIT engineers got together and built it as a single system.

In the end NASA and MIT produced quality software, primarily because of the small-group nature of development at MIT and the overall dedication shown by nearly everyone associated with the Apollo program.
— Frank Hughes interview, Johnson Space Center, Houston, TX, June 2, 1983

The software for the Apollo program kept failing as long as they tried to isolate the systems and components from each other. Once the engineers used the “small group”, that is, once they got together and worked on it as a whole system with close dependencies and full understanding, they were successful. It is not so much that they refused the oversight and process expertise as that they took a systemic, holistic view of the whole thing, and they all understood what they were doing and why. Some of the corners they had to cut caused malfunctions in flight, but the pilots were prepared for those; they knew they could happen, and those faults did not abort the missions.

Software is deadly

As society progresses, it writes more and more software and creates more and more automation. We are already surrounded by software, by devices running software of one kind or another, at all times. Somehow, we still think it is all right not to care what kind of software we put out. I think it is time to understand that everything we make will end up impacting our lives directly. Everything is controlled by software: TVs, airplanes, cars, factories, power plants. The consequences of errors will be felt by everyone, by all of us, in many cases literally on our own skin. Current methods of software development cause mass malfunction.

We screw up – people die. Some examples:

  1. Therac-25: a state-of-the-art linear accelerator for radiation treatment. Starting in 1985, the equipment delivered lethal doses of radiation to three people due to a race condition in the setup sequence.
  2. Ariane 5 rocket destroyed in 1996: the conversion of a velocity value in the guidance unit from 64-bit floating point to a 16-bit integer overflowed (see the sketch after this list). The failure destroyed four scientific satellites; cost: $500 million.
  3. Nuclear holocaust was avoided twice at the last moment only because a human operator intervened, identified the automated results as a false positive and prevented a retaliatory strike. The dates: June 1980, the NORAD nuclear missile false alarm, and September 1983, the Soviet nuclear missile false alarm.
  4. March 2014: Nissan recalls 990,000 cars because a software problem in the occupant classification system might fail to detect an occupant in the passenger seat, which would prevent the airbag from deploying.
  5. July 2014: Honda conceded that a software glitch in electronic control units could cause cars to accelerate suddenly, forcing drivers to scramble to take emergency measures to prevent an accident. Citing the software problems, Honda Motor Co. announced that it was recalling 175,000 hybrid vehicles.
  6. April 2015: the U.S. GAO publishes the “Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen” report, stating that the flight control computers on board contemporary aircraft could be susceptible to break-in and takeover through the on-board WiFi network or even from the ground.
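
A minimal sketch of the Ariane-class bug, in Java rather than the original Ada and with an invented value, just to show how a 64-bit quantity turns into garbage when forced into 16 bits (in the actual Ariane code the conversion raised an unhandled exception rather than wrapping, but the root cause, a value too large for 16 bits, is the same):

```java
public class NarrowingOverflow {
    public static void main(String[] args) {
        // A velocity-like quantity that fits easily in a 64-bit double
        // but not in a signed 16-bit integer (range -32768..32767).
        double horizontalBias = 50_000.0; // invented value for illustration

        // Narrowing conversion to 16 bits: no error in Java, the value simply wraps.
        short converted = (short) horizontalBias;

        System.out.println("original:  " + horizontalBias); // 50000.0
        System.out.println("converted: " + converted);      // -15536, i.e. garbage
    }
}
```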

Complex relationships in the real world – everything depends on everything – make the situation more and more dangerous. The questions of technology and ethics cannot be separated; developers must feel responsibility for what they create.

Specialization or Mastership

There is a tale that six blind men were asked to determine what an elephant looked like by feeling different parts of the elephant’s body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

[Image: Blind monks examining an elephant]

All of them are right. The reason each of them describes it differently is that each touched a different part of the elephant.

Our narrowly specialized view of the world is very similar to that of the blind men feeling an elephant. I see the security part of the elephant, developers see the functionality part, and none of us sees the whole elephant. As a result, improvement is judged on a fragment of the system, not on the system as a whole. We think that if we make a larger front right leg, we will get a better elephant. Or maybe it is a longer trunk that is important. In reality, we get an ugly and malfunctioning elephant. We get a failure. Develop a function, take no account of security – get a failure. Develop a security feature, take no account of usability – get a failure. Since nobody has the holistic view, any approach that makes the elephant bigger on one side or another fails. We fail on all fronts.

The absence of a holistic approach that unites all aspects of the system and sees it in action results in complete and unavoidable failure.

This failure causes losses, both direct financial losses and indirect ones through the waste of resources. We need to slow down and get an overview of what we are doing, each of us. We need to get some understanding of the whole. We need to figure out how everything works together. This is not a problem of documentation, or communication, or proper processes. This is the deeper problem of understanding our creations and thinking about the world. We need to be able to put it all together in our heads to be able to work out how to make the elephant better.

The proponents of agile methods took a step in the right direction by saying that specialization is unnecessary or even harmful for software development. Unfortunately, they then took twenty steps backwards by saying that developers only need to understand the small chunk of code they are working on and nothing else.

Security is integral to development

If you look at current software development, you will notice that security is always built around the product like a fence, after the fact. First you get the function, then you get a fence of security around it. And the situation is exactly the same with other things like quality and usability. As a result, you get a bit of more or less coherent code with a bunch of fences around it; you are lucky if the fences even end up being concentric. The developers of the different aspects of the system tend to have completely different ideas about what the system does and what environment it is intended for.

That is not a good way of doing product development. We need product development that unites all aspects of the product and includes its interaction with the world. We need developers who understand the product’s function and can deal with the multitude of aspects of the product and its lifecycle. Developers and managers must understand that security is an integral part of the product and deal with it responsibly.

I notice that when I talk about the security program I created at Software AG, I invariably get the same reaction: what our security team is doing is very advanced and simply amazing. Well, to me it is not. The difference is that most companies go after security piecemeal and after the fact, while we applied a holistic approach and introduced security into all areas of product development. Other companies simply perform some penetration testing, fix the bugs and leave it at that. We go after the development process, company policies, developer training and so on, taking the view that everything we do contributes to the security or insecurity of our products. That gives what we do a very impressive air of quality, even though it should be perfectly normal to do and to expect.

Let’s start small. I want you to look back at ancient Greek philosophy and understand the meaning of taking a holistic approach to everything you do. We need that principle, we need the holistic approach in other areas of our lives too, but we need it badly now in software engineering. We need the excellence, the harmony, and the overview. Next time you do your job, try considering and following a more holistic, systemic approach.

The holistic approach will allow you to make sure that whatever you do is actually correct, secure, of high quality, works as expected and is simply right for the customer. It will allow you to control changes and innovation, external influences and impulses, while understanding what should be used and what ignored. The holistic approach will also mean that you deliver long-term value and improvements, finally making that better elephant the customers have been waiting for.

 


P.S. Perhaps I should have written “quality” with a capital “Q” throughout, because I do not use the term in the sense of “quality assurance” but in the sense of the inherent quality of everything, the “arete” of the Greeks, from which both the form and the substance of new inventions originate.

Coverity reports on Open Source

Coverity is running a source code scanning project started by the U.S. Department of Homeland Security in 2006, a Net Security article reports. They recently published their report on quality defects, pointing out some interesting facts.

Coverity is mostly about code quality, but they also report security problems. Then again, any quality problem easily becomes a security problem under the right (or, rather, wrong) circumstances. So the report is interesting for its security implications.

Open Source is notably better at handling quality than corporations. Apparently, a corporation can achieve the same level of quality as Open Source by going with Coverity tools. An interesting marketing twist; and although the subject of Open Source superiority has been beaten to death, this deals the issue another blow.

Another interesting finding is that corporations only get better at code quality once the size of the project goes beyond 1 million lines of code. This is not so surprising, and it is good to have some data backing up the idea that corporate coders are either not motivated or not professional enough to write good code without some formalization of code production, testing and sign-off.

This is the necessary evil that hinders productivity at first but ensures an acceptable level of quality later.

Quantitative analysis of faults shows that…

Not to worry, we are not going to get overly scientific here. I happened across an extremely interesting paper called “Quantitative analysis of faults and failures in a complex software system”, published by Norman Fenton and Niclas Ohlsson in the good old year 2000. The paper is very much worth a read, so if you have the patience I recommend you read it and draw your own conclusions. For the impatient, I present my own conclusions from reading the paper.

The gentlemen have done a pretty interesting piece of research that coincides well with my own observations of software development in various companies and countries. They worked with the large software base of a large company to investigate a couple of pretty simple hypotheses that most people take for granted. The research is about software faults in general, but security faults are software faults too, so it is all relevant anyway.

First, they investigated the relationship between the number of faults in the modules of the software system and the size of those modules. It turns out that software faults are concentrated in a few modules rather than scattered uniformly throughout the system, as one might have expected. That fits very well with the idea that developers differ in quality and experience, and that modules written by different people will show different levels of code quality.

Then comes a finding that confirms my experience but contradicts what I quite often hear from managers and coders alike at all levels: the complexity of a module bears no relation to the number of faults in it. More complex (and larger) code does not automatically beget more faults. Whether the code turns out to be of higher or lower quality is, again, down to the people who wrote it.

And then we come to a very interesting finding. Apparently, there is strong evidence that (a) software written in similar environments will have similar quality, and (b) software quality does not improve with time. You see, the developers do not get better at it. If they sucked at the beginning, they still suck ten years later. If they were brilliant to start with, you will get great code from day one. I am exaggerating, but basically that is how it works. Great stuff, right?

So, the summary of the story is that if you want good code, get good developers. There is simply no other way. Good developers will handle high complexity and keep up the good work; bad (and cheap) developers will not, and will not learn. And no amount of tooling will rectify that. End of story.