Worst languages for software security

I was sent an article about the programming languages that generate the most security bugs in software today. The article seemed to refer to a report by Veracode, a company I know well, to discuss what software security problems are out there in applications written in different languages. That is an excellent question and a very interesting subject for discussion. Except that the article failed to actually discuss anything, making instead misleading and incoherent statements about both old-school languages like C/C++ and scripting languages like PHP. I fear we will have to look into this problem ourselves then.

So, what languages are the worst when it comes to software security? Are they the old C and C++, as so many proponents of Java would like us to believe? Or are they the new and quickly developing languages with little enforcement of structure, like PHP? Let’s go to Veracode and get their report: “State of Software Security. Focus on Application Development. Supplement to Volume 6.”

The report includes a very informative diagram showing what percentage of applications pass the OWASP policy for a secure application out of the box, grouped by the language of the application. Passing the OWASP policy is defined as “not containing any of the security problems mentioned in the OWASP Top 10 most important vulnerabilities for web applications”, and OWASP is the accepted industry authority on web application security. So if they say that something is a serious vulnerability, you can be sure it is. Let’s look at the diagram:

[Figure: Veracode – percentage of applications passing OWASP policy, by language]

Fully 60% of applications written in C/C++ come without those most severe software security vulnerabilities listed by OWASP. That is a very good result and a notable achievement. Next, with pass rates one and a half to two times lower, come the three mobile platforms. And the next actual programming language, .NET, comes out more than two times worse. Java is two and a half times worse than C/C++. The scripting languages are three times worse.

Think about it. Taking that 60% as the baseline, “two and a half times worse” puts Java at roughly a 24% pass rate: applications written in Java are two and a half times less likely to be free of those most serious security vulnerabilities than applications written in C/C++. And C/C++ is the only language that gives you a better than 50% chance of not having serious security vulnerabilities in your application.

Why is that?

The reasons are many. For one thing, Java has never delivered on its promises of security, stability, and uniformity. People must struggle with issues that have long been resolved in other languages, like the idiotic memory management and garbage collection, reinventing the wheel in any more or less non-trivial piece of software. The language claims to be “easy” and “fool-proof” while letting people unknowingly compare string references instead of string contents with the equality operator. The discrepancy between the fantasy and the reality is huge in the Java world and getting worse all the time.
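
To make the string comparison trap concrete, here is a minimal sketch (the class and variable names are mine, not from any report):

```java
// In Java, == on strings compares object references, not contents.
// It often "works" for literals thanks to string interning, which is
// exactly why the bug slips into production unnoticed.
public class StringCompare {
    public static void main(String[] args) {
        String expected = "s3cr3t";
        String supplied = new String("s3cr3t"); // distinct object, equal contents

        System.out.println(expected == supplied);      // false: different references
        System.out.println(expected.equals(supplied)); // true: equal contents
    }
}
```

An access check written with == here would wrongly reject valid input, and because the all-literals case happens to work, tests may never catch it.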

Still, the main reason, I think, is the quality of the developers: both the level of knowledge and expertise, as it were, and the sheer carelessness of Java programmers. Where C/C++ developers are actually masters of software development, Java developers are most of the time just coders. That makes a difference. People learn Java in all sorts of courses or by themselves – companies constantly hire Java developers, so it makes sense to follow the market demand. Except that those people are kids with ad-hoc knowledge of a programming language and absolutely no concept of software engineering. By contrast, most C/C++ people are actually engineers who know much better what they are doing, even when they write in a different language. But the “coders” are much cheaper than real engineers, so the companies developing in Java end up with lots of them, and the software quality goes down the drain.

The difference in the quality of the software is easily apparent when you compare the diagrams for the types of issues detected, taken mostly from the same report:

[Figure: Veracode – problem areas by language]

You can see that code quality problems are only 27% of the total number of issues detected in the case of C/C++, while for Java code the quality issues represent a whopping 80% of the total.

Think again. Code written in Java has several times worse quality than code written in C/C++.

It is not surprising that the quality problems result in security vulnerabilities. Quality and security go hand in hand, and both require discipline and knowledge on the part of the developer. Where one suffers, the other inevitably does as well.

The conclusion: if you want secure software, you want C/C++. You definitely do not want Java. And even if you are stuck with Java, you still want to have C/C++ developers to write your Java code because they are more likely to write better and more secure software.

The human factor: philosophy and engineering

The ancient Greeks had a concept of “aretê” (/ˈærətiː/) that is usually loosely translated into English as “quality”, “excellence”, or “virtue”. It was all that and more: the term meant the ultimate and harmonious fulfillment of a task, purpose, function, or even a whole life. Living up to this concept was the highest achievement one could attain in life. Unfortunately, it does not translate well into English, where the underlying concept is absent.

To give an example of aretê, consider a work of art, like a painting or a book. We could argue infinitely about a work of art and its many aspects, but the majority of people have no problem identifying whether a work of art is actually a masterpiece or a stupid piece of garbage. That “whatsit” that we identify in a masterpiece is the aretê, the harmony of total and complete excellence of the author pouring his virtue into his work.

[Image: Raphael, “The School of Athens” – Plato and Aristotle]

Unfortunately, the science of today’s world is not built on the same principles and, in fact, is drifting away from them further and further. It probably all started with Aristotle and his move away from the “essence of things” as the source of everything else. Aristotle taught us, essentially, that we can understand things by splitting and analyzing them, that the “essence” of a thing can be understood from the thing itself, that we can achieve anything through the “divide and conquer” principle. That is where our scientific methods originate. Aristotle, among other philosophers, gave us logic and the foundation for all other sciences.

The scientific methods of divide and conquer are great – they are, effectively, what built this civilization – but they have a downside. They are fine when there is a counterbalance to them, but when they are taken as the only possible view of the world, as the ultimate philosophy, they are taken to extremes and start causing problems. It may seem far-fetched to draw a connection from ancient philosophy to contemporary engineering and security, but our engineering is based on how we think, so our products necessarily reflect our philosophy, whether apparent or obscured.

What is the problem with the philosophical direction that started with Aristotle and that we have all effectively followed ever since? The problem is that it leaves no place for, and does not require, the aretê, the harmonious excellence. The philosophy of today makes our thinking, and not only scientific thinking, compartmentalized. We are now used to thinking about things as completely separate from each other. We are used to dividing the world up into small chunks and operating on those small chunks one at a time, arguing that this makes the whole thing more manageable. We treat the world as a puzzle, investigating and tweaking one piece at a time. But we forget about the relation of the chunk we are working on to the grand scheme of things. We forget about the influences and dependencies, both in space and time. We forget that the puzzle must click together in the end.

For example, when quality management is explained to you, you usually receive an overview of the famous “five views of quality”. The “transcendental view” is formulated as the inherent quality obvious to the observer – “I know it when I see it”; the “product-based view” provides for designing a product against benchmarks for speed, mean failure rate, and so on; the “user-based view” calls for satisfying consumer preferences; the “manufacturing-based view” requires conformance to specification; and the “value-based view” calls for design based on cost-benefit analysis. Out of all these things, the only one the customer really sees and really cares about is the first – the aretê, the “transcendental quality”. Guess which one is completely ignored in quality management? That very same one, for the simple reason that it is not easily broken up into pieces that can be measured and “improved” on their own. And the same problem permeates all of our engineering, and especially security.

Systemic approach

That means we tend to ignore one of the cornerstones of security: the systemic approach. We often come across the myth, declared even from security conference stages, that secure components make a secure system. This assumption drives the creation of many a system that is claimed, for this reason alone, to be secure. Well, what a surprise: they do not, and such systems are not secure. This problem is well known in the security field, especially where seriously high levels of security are involved, as in the smart card business. When you take two secure components and combine them, you cannot make any statement about the security of the whole based on the security of each part alone. You must consider the whole thing before you can make any statements about the system.

Secure components are never secure unconditionally. They are secure conditionally. They are secure as long as a certain set of assumptions holds true. Once an assumption is invalid, the component is no longer secure. When we combine two secure components, we create a problem of composition, where the components potentially interact in unforeseen ways. They may still be secure, but they may not be. This is the case where a systemic, holistic view of the system must definitely take the upper hand.
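
A minimal sketch of such a composition failure, with hypothetical names: each step below is “secure” under its own assumption, but composing them in the wrong order re-opens a classic path traversal hole.

```java
// The validator is secure *assuming* its input is already decoded.
// Running it before decoding violates that assumption.
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class Composition {
    // "Secure" component: rejects parent-directory references.
    static boolean looksSafe(String path) {
        return !path.contains("..");
    }

    public static void main(String[] args) {
        String raw = "%2e%2e%2fetc%2fpasswd";          // URL-encoded "../etc/passwd"
        if (looksSafe(raw)) {                          // passes: ".." is still encoded
            String path = URLDecoder.decode(raw, StandardCharsets.UTF_8);
            System.out.println("Opening: " + path);    // Opening: ../etc/passwd
        }
    }
}
```

Swap the two steps and the validator does its job. Neither component changed; only the composition did.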

Short time horizons

Another problem is the extreme shortening of time horizons. Have you noticed how everyone is interested only in immediate results, immediate profits, things we can deliver, sell, buy, have, wear, eat, and drink today? It is noticeable everywhere in society, but in the software industry it has become the defining aspect of life.

When whatever we are building exists in isolation, when we need not consider the effects on the industry, technology, society… we do not need to worry about long-term results of our work. We did this “thing” and we got our bonus, that’s the end of the story. But is it?

I am sure we have all come across problems that arise from someone not doing a proper job because they did not think it was worth the trouble – for them, it was all done and gone. Yes, for whoever did it, it was done and gone, but for us, the people coming after, don’t we wish that someone had been more careful, more precise, more thoughtful? Don’t we wish he had spent just a little more time making sure everything works not just somehow, but properly?

I have a friend who works in a restaurant chain in St. Petersburg. It is a pretty large chain and there are many things to do, of course. One thing we once talked about was food safety and health inspection. I was frankly surprised at how much effort goes into compliance with those rules. They actually follow all of the food safety guidance and perform thorough audits and certification. When I asked her why they bother, my friend told me that they have two very serious reasons, and both are long-term overall business risks. One, if someone should get food poisoning, they would otherwise have no certifications and audit results to fall back on, and they would have a hard time in court proving that they had exercised due diligence in all matters. Two, they would lose a lot of clientele if something like that ever happened, and in an established industry with a lot of competition that could well mean going out of business.

So you could call that risk management, due diligence, or simply a good understanding that business is not just about getting products out of the door as cheaply as possible, an understanding that there is more to doing good business than momentary advantages. My friend has a holistic view of the business that encompasses everything that is important to it, and that makes her and her business successful.

They could, like so many companies in our software field, take the short-term view and save some money, get something quick and dirty done, but they understand that this is not a sound business strategy in the long term. In our field, security is getting worse and worse, and somehow we still think it is okay to think entirely in the short term, to the next release, to the next milestone. What we need is a proper long-term consideration of all aspects of the products we develop and deliver for things to start changing for the better. A holistic approach to software development may slow things down, but it will bring the risk of future collapses down for all of us.

Security prevents innovation

Another aspect of the same “faster and fancier now!” game that we encounter regularly is the “Security should not prevent innovation!” slogan. Says who? Not that I am against innovation, but security must sometimes prevent certain innovations, like the tweaking of cryptographic algorithms for performance in ways that would break their security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.

“Innovation” has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We have taken it too far; we pray to innovation in places where it does not matter or is even harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development, resulting not only in low security but in low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering constructions now, without any regard to proper planning and execution? Well, examples are easy to find and the results are disastrous.


What makes the big difference? We can notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or do we wait for things to collapse first?

Uncertainty principle

An interesting concept surfaced not so long ago as an excuse for not doing anything, called the “uncertainty principle of new technology”…

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, even calling it the “Collingridge dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? Now, this statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture, and it will be very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances, namely, when scientists and engineers develop a new technology looking only at the internal structure of the technology itself, without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one can observe that the parts can be exchanged, modified, and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. Are they useful to society, the world, and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is guided by the underlying quality of life, which shapes its progress. Society influences what gets invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes scientists should stand up for the fundamental principle of quality over quantity of inventions and fight for technology that would, in turn, steer society towards a better and more harmonious life.

Should technology be developed with the utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and this academic dilemma would not exist. Everything that is built quality-first remains so forever and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by the current academia and industry. We can re-base society to take quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, this rationality that now guides everything, even to the detriment of life itself, and concentrate on quality instead.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush of delivery, but we would do better to rediscover them than to paper over their absence with pointless and harmful excuses.


NASA Apollo Mission

Think of the Apollo mission that brought people to the Moon. Would you not say that that was a great achievement, not only for the engineers but for the whole world? The Apollo mission was a project that encompassed many different areas, from metallurgy to psychology, to make space travel possible.

Apollo ships also had software. The software was complex and had many parts. The spaceship contained a lot of sensors, equipment, and machinery controlled by software: command and data handling, telecommunications, electrical power systems control, propulsion control, guidance and navigation systems, spacecraft integrity control, thermal control, and so on. The spaceship is an incredibly complex system that operates under a wide variety of harsh and extreme conditions: vibration stress and accelerations, radiation and cosmic rays, meteoroids and extreme temperatures. And do not forget that the system must also be fool-proof. As one of the people working on Apollo put it, “there is always some fool that switches the contact polarity.”

And this complex system that had to operate under tremendous stress actually worked. Apollo not only went to the Moon but also returned safely back to Earth. Is this not an example of great engineering? Is this not an example of a great achievement of humankind?

The software for the mission was developed by MIT engineers under the project management of NASA, with software development process experts from IBM. However, the success of the software development for the Apollo mission cannot be attributed to the software process guidance from IBM or to the project management of NASA. Those failed miserably. They tried to divide the system into components and develop the software to the best standards… and it did not work. MIT was lucky, in a sense, that the launch was delayed due to hardware problems; otherwise NASA would have had to cancel it because of the software problems.

it gets difficult to assign out little task groups to program part of the computer; you have to do it with a very technical team that understands all the interactions on all these things.
— D. G. Hoag interview, MIT, Cambridge, MA, by Ivan Ertel, April 29, 1966

The software was only developed because the MIT engineers got together and did it as a single system.

In the end NASA and MIT produced quality software, primarily because of the small-group nature of development at MIT and the overall dedication shown by nearly everyone associated with the Apollo program.
— Frank Hughes interview, Johnson Space Center, Houston, TX, June 2, 1983

The software for the Apollo program kept failing as long as they tried to isolate the systems and components from each other. Once the engineers switched to the “small group”, that is, once they got together and worked on it as a whole system with close dependencies and full understanding, they were successful. It is not so much that they rejected the oversight and process expertise, but that they took a systemic, holistic view of the whole thing, and they all understood what they were doing and why. Some of the corners they had to cut caused malfunctions in flight, but the pilots were prepared for those; they knew they could happen, and those faults did not abort the mission.

Software is deadly

As society progresses, it writes more and more software and creates more and more automation. We are already surrounded by software, by devices running software of one kind or another, at all times. Somehow, we still think it is all right not to care what kind of software we put out. I think it is time to understand that everything we make will end up impacting us directly in our lives. Everything is controlled by software: TVs, airplanes, cars, factories, power plants. The consequences of errors will be felt by everyone, by all of us, in many cases literally on our own skin. Current methods of software development cause mass malfunction.

We screw up – people die. Some examples:

  1. Therac-25: a state-of-the-art linear accelerator for radiation treatment. In 1985, the equipment delivered lethal doses of radiation to three people due to a race condition in the setup code.
  2. Ariane 5 rocket destroyed in 1996: a conversion of a 64-bit floating-point velocity value to a 16-bit signed integer in the guidance unit overflowed. Destroyed four scientific satellites; cost: $500 million. (A minimal sketch of this failure mode follows the list.)
  3. Nuclear holocaust was avoided twice at the last moment only because a human operator intervened, recognized the automated results as false positives, and prevented a retaliatory strike: the NORAD nuclear missile false alarm of June 1980 and the Soviet nuclear missile false alarm of September 1983.
  4. March 2014: Nissan recalls 990,000 cars because a software problem in the occupant classification system might fail to detect an occupant in the passenger seat, preventing airbag deployment.
  5. July 2014: Honda conceded that a software glitch in electronic control units could cause cars to accelerate suddenly, forcing drivers to scramble to take emergency measures to prevent an accident. Citing the software problems, Honda Motor Co. announced a recall of 175,000 hybrid vehicles.
  6. April 2015: the U.S. GAO publishes the report “Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen”, stating that the flight control computers on board contemporary aircraft could be susceptible to break-in and takeover via the on-board WiFi network or even from the ground.
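
Here is a minimal sketch of the Ariane-style failure mode, written in Java rather than the original Ada and with illustrative values and names, just to show how quietly a narrowing conversion goes wrong:

```java
// Narrowing a 64-bit floating-point value into a 16-bit integer.
// Java wraps around silently; the Ariane 5 Ada code raised an unhandled
// exception instead, with the same fatal outcome.
public class Narrowing {
    public static void main(String[] args) {
        double horizontalVelocity = 40000.0;        // fine for the faster new rocket
        short stored = (short) horizontalVelocity;  // 16-bit range: -32768..32767
        System.out.println(stored);                 // prints -25536, no warning
    }
}
```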

Complex relationships in the real world – everything depends on everything – make the situation more and more dangerous. The questions of technology and ethics cannot be separated; developers must feel responsibility for what they create.

Specialization or Mastership

There is a tale that six blind men were asked to determine what an elephant looked like by feeling different parts of the elephant’s body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

[Image: Blind monks examining an elephant]

All of them are right. The reason each one describes it differently is that each one touched a different part of the elephant.

Our narrowly specialized view of the world is very similar to that of the blind men feeling the elephant. I see the security part of the elephant, developers see the functionality part, and none of us sees the whole elephant. As a result, improvement is judged on a fragment of the system, not on the system as a whole. We think that if we make a larger front right leg, we will get a better elephant. Or maybe it is a longer trunk that is important. In reality, we get an ugly and malfunctioning elephant. We get a failure. Develop a function, take no account of security – get a failure. Develop a security feature, take no account of usability – get a failure. Since nobody has the holistic view, any approach of making the elephant bigger on one side or another fails. We fail on all fronts.

The absence of a holistic approach that unites all of the aspects of the system and sees it in action results in complete and unavoidable failure.

This failure causes losses, both direct financial losses and indirect ones through the waste of resources. We need to slow down and get an overview of what we are doing, each of us. We need to get some understanding of the whole. We need to figure out how everything works together. This is not a problem of documentation, or communication, or proper processes. It is the deeper problem of understanding our creations and thinking about the world. We need to be able to put it all together in our heads to be able to work out how to make the elephant better.

The proponents of agile methods took a step in the right direction by saying that specialization is unnecessary or even harmful for software development. Unfortunately, they then took twenty steps away by saying that developers only need to understand the small chunk of code they are working on and nothing else.

Security is integral to development

If you look at current software development, you will notice that security is always built around the product like a fence, after the fact. First you get the function, and then you get a fence of security around it. The situation is exactly the same with other things, like quality and usability. As a result, you get a bit of more or less coherent code with a bunch of fences around it. You are lucky if the fences even end up being concentric. The developers of different aspects of the system tend to have completely different ideas about what the system does and what environment it is intended for.

That is not a good way of dealing with product development. We need product development that unites all aspects of the product and includes its interaction with the world. We need developers that understand the product’s function and can deal with the multitude of aspects of the product and its lifecycle. Developers and managers must understand that security is an integral part of the product and deal with it responsibly.

I notice that when I talk about the security program I created at Software AG, I invariably get the same reaction: what our security team is doing is very advanced and simply amazing. Well, to me it is not. The difference is that most companies go after security piecemeal and after the fact, while we applied the holistic approach and introduced security into all areas of product development. Other companies simply perform some penetration testing, fix the bugs, and leave it at that. We go after the development process, company policies, developer training, and so on, taking the view that everything we do contributes to the security or insecurity of our products. That lends what we do a very impressive air of quality, even though it is perfectly normal to do and to expect.

Let’s start small. I want you to look back at ancient Greek philosophy and understand the meaning of taking a holistic approach to everything you do. We need that principle, we need the holistic approach in other areas of our lives too, but we need it badly now in software engineering. We need the excellence, the harmony, and the overview. Next time you do your job, try considering and following a more holistic, systemic approach.

The holistic approach will allow you to make sure that whatever you do is actually correct, secure, of high quality, works as expected, and is simply right for the customer. It will allow you to control changes and innovation, external influences and impulses, while understanding what should be used and what should be ignored. The holistic approach will also mean that you deliver long-term value and improvements, finally making that better elephant the customers have been waiting for.


Security Forum Hagenberg 2015

I will be talking about philosophy in engineering, or the human factor in the development of secure software, at the Security Forum in Hagenberg im Mühlkreis, Austria, on the 22nd of April.

https://www.securityforum.at/en/

My talk will concentrate on the absence of a holistic, systemic approach in current software development, a result of taking the scientific approach of “divide and conquer” a bit too far and applying it where it should not be applied.

It may seem at first that philosophy has nothing to do with software development or with security, but that is not so. We, human beings, operate on a particular philosophical basis, a basis of thinking, whether implicit or explicit. Our technologies are fine; what needs to change is the human that applies and develops those technologies. And the change starts in the mind, with the philosophy that we apply to our understanding of the world.

Google bots subversion

There is a lot of truth in the saying that every tool can be used for good and for evil. There is no point in blocking the tools themselves, as the attacker will turn to new tools and subvert the most familiar tools in unexpected ways. Now Google’s crawler bots have been turned into such a weapon, made to execute SQL injection attacks against websites chosen by the attackers.

The discussion of whether Google should or should not do anything about this is interesting, but we are not going to talk about that. Instead, consider that this is a prime case of a familiar tool, one that comes back to your website regularly, being subverted into doing something evil. You did not expect that to happen, and you cannot just block Google from your website. This is a perfect example of a security attack where your application’s security is the only way to stop the attacker.

The application must be written in such a way that it does not matter whether it is protected by a firewall – you will not always be able to block the attacks with the firewall. The application must also be written so that it withstands an unanticipated attack, something that you were not able to predict in advance. The application must be prepared to ward off things that do not yet exist at the time of writing. Secure design and coding cannot be replaced with firewalls and add-on filtering.
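
For the SQL injection case specifically, the in-application defense is well known: keep attacker-supplied input as data, never as SQL text. A minimal sketch in Java with JDBC follows; the table and column names are hypothetical.

```java
// A parameterized query neutralizes SQL injection no matter where the
// request came from: a browser, a bot, or a subverted crawler.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ProductLookup {
    public ResultSet findByName(Connection db, String userInput) throws SQLException {
        // The input is bound as a value, never concatenated into the SQL,
        // so a payload like "' OR '1'='1" stays inert.
        PreparedStatement stmt =
            db.prepareStatement("SELECT id, name, price FROM products WHERE name = ?");
        stmt.setString(1, userInput);
        return stmt.executeQuery();
    }
}
```

No firewall rule is needed for this to hold; the protection travels with the application.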

Only such securely designed and implemented applications withstand unexpected attacks.

Security Assurance vs. Quality Assurance

It is often debated how quality assurance relates to security assurance. I have a slightly unconventional view of the relation between the two.

You see, when we talk about security assurance in software, I view the whole process end to end in my head. The process runs roughly like this:

  • The designer has an idea in his head
  • The software design is a translation of that into a document
  • Development translates the design into the code
  • The code is delivered
  • Software is installed, configured and run

Security, in my view, is the process of making sure that whatever the designer was thinking in his head ends up actually running at the customer site. The software must run exactly the way the designer imagined it; that is the task.

Now, the software has to run correctly both under normal circumstances and under really weird conditions, i.e. under attack. So quality assurance takes the part of verifying that the software runs correctly under normal circumstances, while security assurance takes care of the whole picture.
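
One way to picture the split is in tests. A minimal sketch, assuming a hypothetical AgeParser class and JUnit 5:

```java
// Quality assurance exercises the expected path; security assurance
// also exercises the hostile one. Both verify that the designer's
// intent survives contact with reality.
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class AgeParserTest {
    @Test
    void parsesNormalInput() {                   // quality assurance
        assertEquals(42, AgeParser.parse("42"));
    }

    @Test
    void rejectsHostileInput() {                 // security assurance
        assertThrows(IllegalArgumentException.class,
                     () -> AgeParser.parse("42; DROP TABLE users"));
    }
}
```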

Thus quality assurance becomes an integral part of security assurance.

User Data Manifesto

The confirmation that governments spy on people on the Internet and have access to private data they should not have access to has sparked some interesting initiatives. One such initiative is the User Data Manifesto:

1. Own the data
The data that someone directly or indirectly creates belongs to the person who created it.

2. Know where the data is stored
Everybody should be able to know: where their personal data is physically stored, how long, on which server, in what country, and what laws apply.

3. Choose the storage location
Everybody should always be able to migrate their personal data to a different provider, server or their own machine at any time without being locked in to a specific vendor.

4. Control access
Everybody should be able to know, choose and control who has access to their own data to see or modify it.

5. Choose the conditions
If someone chooses to share their own data, then the owner of the data selects the sharing license and conditions.

6. Invulnerability of data
Everybody should be able to protect their own data against surveillance and to federate their own data for backups to prevent data loss or for any other reason.

7. Use it optimally
Everybody should be able to access and use their own data at all times with any device they choose and in the most convenient and easiest way for them.

8. Server software transparency
Server software should be free and open source software so that the source code of the software can be inspected to confirm that it works as specified.

Apple – is it any different?

The article “Password denied: when will Apple get serious about security?” in The Verge talks about Apple’s insecurity, blaming Apple’s badly organized security and the absence of any visible security strategy and effort. Moreover, Apple does not even seem to take security sufficiently seriously.

“The reality is that the Apple way values usability over all else, including security,” Echoworx’s Robby Gulri told Ars.

It is good that Apple gets a bit of bashing, but are they really all that different? If you look around and read about all the other companies, you quickly realize it is not just Apple; it is a common, too common, problem: most companies do not take security seriously. And they have a good reason: security investment cannot be justified in the short term, it cannot be sold, and it cannot be turned into bonuses and raises for the management. And the risks are typically ignored, as I have already discussed previously.

So in this respect Apple simply follows in the footsteps of all the other software companies out there. They invest in features, in customer experience, in brand management, but they ignore security. Even the recent scandal with the wipe-out of Mat Honan’s digital life, which got a lot of publicity, did not change much, if anything. The company did not suffer enough damage to start thinking about security seriously. The damage was done to a private individual and did not translate into any impact on sales. So it demonstrated once again that security problems do not hurt the bottom line. Why else would a company care?

We need the damage done to external parties to be internalized and absorbed by the companies. As long as it stays external, they will not care. It is exactly the same as with ecology: ecological costs are external to the company, so it will not care unless there is regulation that makes those costs internal. We need a mechanism for internalizing the security costs.

Security training – does it help?

I came across the suggestion to train (nearly) everyone in an organization in security subjects. The idea is very good: we often face the problem that the management has absolutely no knowledge of or interest in security and therefore ignores the subject despite the efforts of the security experts in the company. Developers, quality, documentation, product management – they all need to be aware of how serious software security is for the company and recognize that sometimes security must be given priority.

But will it help? I have spent a lot of time educating developers and managers on security. My experience is that it does not help with most people. Some people get interested and involved – those are the ones naturally inclined to take good care of all aspects of their products, including but not limited to security. Most people do not really care. They are not interested in security; they are there to do a job and get paid. And nobody gets paid for more security.

That repeatedly results in situations like the one described in the article:

“If internal security teams seem overly draconian in an organization, the problem may not be with the security team, but rather a lack of security and risk awareness throughout the organization.”

Unfortunately, a simply informative security training is not going to change that. People tend to ignore rare risks, and that is what happens to the security of product development. What we need is not a security awareness course but a way to “hook” people on security, a way to make them understand, deep inside, that security is important, not in an abstract way, but to them personally. Then security will work. How do we do that?

Brainwashing in security

At first, when I read the article titled “Software Security Programs May Not Be Worth the Investment for Many Companies”, I thought it was a joke or a prank. But then I had a feeling it was not. And it was not the 1st of April. And it seems to be a record of events at the RSA Conference. Bloody hell, that guy, John Viega of SilverSky, “an authority on software security”, is speaking in earnest.

He is one of those people who are continuously making the state of security as miserable as it is today. His propaganda is simple: do not invest in security, it is a total waste of money.

“For most companies it’s going to be far cheaper and serve their customers a lot better if they don’t do anything [about security bugs] until something happens. You’re better off waiting for the market to pressure on you to do it.”

And following suit was the head of security at Adobe, Brad Arkin – can you believe it? I am not surprised now that we have to use things like NoScript to make sure their products do not execute in our browsers. I have a pretty good guess at what Adobe and SilverSky are doing: they are covering their asses. They do the minimum required to make sure they cannot easily be sued for negligence and for deliberately exposing their customers. But they do not care about you and me; they do not give a damn if their software is full of holes. And they do not deserve to be called anything that has the word ‘security’ in it.

The stuff Brad Arkin is pushing at you flies in the face of the very security best practices we swear by:

“If you’re fixing every little bug, you’re wasting the time you could’ve used to mitigate whole classes of bugs,” he said. “Manual code review is a waste of time. If you think you’re going to make your product better by having a lot of eyeballs look at a lot of code, that’s the worst use of human labor.”

No, sir, you are bullshitting us. Your company does not want to spend the money on security. Your company is greedy. Your company wants to take money from the customers for software full of bugs and holes. You know this, and you are deliberately telling lies to deceive not only the customers but even people who know a thing or two about security. But we have seen this before already, and no mistake.


The problem is that many small companies will believe anything pronounced at an event like the RSA Conference and take it for granted as the ultimate last word in security. And that will make the security state of things even worse. We will have more of those soft-underbelly products and more companies that practice “security by ignorance”. And we do not want that.

The effect of security bugs can be devastating. The normal human brain is not capable of properly estimating risks of large magnitude but rare occurrence and tends to downplay them. That is why the risk of large security problems that can bring a company to its knees is usually discarded. But security problems can be so severe that they will put even a large company out of business, not to mention that a small company would not survive even a slightly above-average security problem.

So, thanks, but no, thanks. Security is a dangerous field; it is commonly compared to a battlefield, and there is some truth in that. Stop being greedy and make sure your software does not blow up.

The art of unexpected

I am reading through the report of a Google vulnerability on The Reg and laughing. This is a wonderful demonstration of what attackers do to your systems: they apply the unexpected. The story is always the same. When we build a system, we try to foresee the ways in which it will be used. Then the attackers come and use it in a still unexpected way. So the whole security field can be seen as an attempt to expand the expected and narrow down the unexpected. Strangely, the unexpected keeps getting bigger. And, conversely, the art of being an attacker is the art of the unexpected: shrugging off conventions and expectations and moving into the unknown.

Our task, then, is to do the impossible: to protect the systems from the unexpected and, largely, the unexpectable. We do it in a variety of ways, but we have to remember that we are always on the losing side of this battle. There is simply no way to turn all of the unexpected into the expected. So this is the zen of security: protect the unprotectable, expect the unexpectable, and keep doing it in spite of the odds.
