#security – software development security and web security, security best practices and discussions, break-ins and countermeasures. Everything you ever wanted to know about software security but were afraid to ask, for fear of not understanding the answer!

The human factor: philosophy and engineering

The ancient Greeks had a concept of “aretê” (/ˈærətiː/) that is usually loosely translated into English as “quality”, “excellence”, or “virtue”. It was all that and more: the term meant the ultimate and harmonious fulfillment of a task, purpose, function, or even a whole life. Living up to this concept was the highest achievement one could attain in life. Unfortunately, it does not translate well into English, where the necessary concept is simply absent.

To give an example of arete, consider a work of art, like a painting or a book. We could argue endlessly about a work of art and its many aspects, but most people have no problem identifying whether a work of art is actually a masterpiece or a stupid piece of garbage. That “whatsit” we identify in a masterpiece is the arete, the harmony of total and complete excellence of the author pouring his virtue into his work.

[Image: Raphael, The School of Athens – Plato and Aristotle]

Unfortunately, the science of today’s world is not built on the same principle and, in fact, is moving further and further away from it. It all probably started with Aristotle and his move away from the “essence of things” as the source of everything else. Aristotle taught us, essentially, that we can understand things by splitting and analyzing them, that the “essence” of a thing can be understood from the thing itself, that we can achieve anything through the “divide and conquer” principle. That is where our scientific methods originate. Aristotle, among other philosophers, gave us logic and the foundation for all other sciences.

The scientific methods of divide and conquer are great, they are what effectively built this civilization, but they have a downside. They are fine while there is a counterbalance to them, but when they are taken as the only possible view of the world, as the ultimate philosophy, they get pushed to extremes and start causing problems. It may seem far-fetched to draw a connection from ancient philosophy to contemporary engineering and security, but our engineering is based on how we think, so our products necessarily reflect our (apparent or obscured) philosophy.

What is the problem with the philosophical direction that started with Aristotle and that we have effectively followed ever since? The problem is that it leaves no place for, and does not require, the arete, the harmonious excellence. The philosophy of today makes our thinking, and not only our scientific thinking, compartmentalized. We are now used to thinking about things as completely separate from each other. We are used to dividing the world into small chunks and operating on those chunks one at a time, arguing that this makes the whole more manageable. We treat the world as a puzzle, investigating and tweaking one piece at a time. But we forget about the relation of the chunk we are working on to the grand scheme of things. We forget about the influences and dependencies, both in space and time. We forget that the puzzle must click together in the end.

For example, when quality management is explained to you, you usually get an overview of the famous “five views of quality”. The “transcendental view” is the inherent quality obvious to the observer: “I know it when I see it”. The “product-based view” provides for designing a product against benchmarks for speed, mean failure rate and so on. The “user-based view” calls for satisfying consumer preferences. The “manufacturing-based view” requires conformance to specifications. And the “value-based view” calls for design based on cost-benefit analysis. Out of all of these, the only thing the customer really sees and really cares about is the first one: the arete, the “transcendental quality”. Guess which one is completely ignored in quality management? The very same one, for the simple reason that it is not easily broken up into pieces that can be measured and “improved” on their own. The same problem permeates all of our engineering, and especially security.

Systemic approach

That means we tend to ignore one of the cornerstones of security: the systemic approach. We keep coming across the myth, declared even from security conference stages, that secure components make a secure system. This assumption drives the creation of many a system that is then claimed, for this reason, to be secure. Well, what a surprise: it doesn’t, and they aren’t. This problem is well known in the security field, especially where seriously high levels of security are involved, as in the smart card business. When you take two secure components and combine them, you cannot make any statement about the security of the whole based just on the security of each part. You must consider the whole thing before you can make any statements regarding the system.

Secure components are never secure unconditionally. They are secure conditionally: they are secure as long as a certain set of assumptions holds true. Once an assumption becomes invalid, the component is no longer secure. When we combine two secure components we create a problem of composition, where the components can interact in unforeseen ways. They may still be secure, or they may not. This is where a systemic, holistic view of the system must definitely take the upper hand.
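
As a minimal illustration of how such an assumption can silently break, consider the following sketch. The two components and their names are hypothetical, not taken from any real product; each one is “secure” under its own assumption, yet composing them in the wrong order re-creates a classic path traversal hole.

```python
# Hypothetical components: each is "secure" only under its own assumption.
from urllib.parse import unquote

def reject_traversal(path: str) -> str:
    """Component A: assumes its output is used verbatim, with no further decoding."""
    if ".." in path:
        raise ValueError("path traversal rejected")
    return path

def resolve_request(path: str) -> str:
    """Component B: decodes %-escapes; assumes its input was validated *after* decoding."""
    return unquote(path)

# Composed in the wrong order, both assumptions fail and the system is insecure:
checked = reject_traversal("%2e%2e/%2e%2e/etc/passwd")  # passes: no literal ".." present
print(resolve_request(checked))                          # prints ../../etc/passwd
```

Each component passes its own review; only a view of the whole data flow reveals that the validation is performed on a different representation of the input than the one that is finally used.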

Short time horizons

Another problem is the extreme shortening of time horizons. Have you noticed how everyone is interested only in immediate results, immediate profits, things we can deliver, sell, buy, have, wear, eat, and drink today? It is noticeable everywhere in society, but in the software industry it has become the defining aspect of life.

When whatever we are building exists in isolation, when we need not consider the effects on the industry, technology, society… we do not need to worry about the long-term results of our work. We did this “thing” and we got our bonus; that’s the end of the story. But is it?

I am sure we have all come across problems that arise from someone not doing a proper job because they did not think it was worth the trouble once it was done and gone. Yes, for whoever did it, it was done and gone, but for us, the people coming after, don’t we wish that someone had been more careful, more precise, more thoughtful? Don’t we wish he had spent just a little more time making sure everything works not just somehow, but properly?

I have a friend who works for a restaurant chain in St. Petersburg. It is a pretty large chain and there are many things to take care of, of course. One thing we talked about once was food safety and health inspections. I was frankly surprised at how much effort goes into compliance with those rules. They actually follow all of the food safety guidance and perform thorough audits and certifications. When I asked her why they bother, my friend told me they have two very serious reasons, and both are long-term, overall business risk problems. One, if someone were to get food poisoning, they would otherwise have no certifications and audit results to fall back on, and they would have a hard time in court proving that they had exercised due diligence in all matters. Two, they would lose a lot of clientele if something like that ever happened, and for an established industry with a lot of competition that could well mean going out of business.

So you could call that risk management, due diligence, or simply a good understanding that business is not just about getting products out of the door as cheaply as possible, an understanding that there is more to running a good business than momentary advantages. My friend has a holistic view of the business that encompasses everything important to it, and that is what makes her and her business successful.

They could, like so many companies in our software field, take a short-term view and save some money, get something quick and dirty done, but they understand that this is not a sound business strategy in the long term. In our field, security is getting worse and worse, and somehow we still think it is okay to think entirely in the short term, to the next release, to the next milestone. What we need is a proper long-term consideration of all aspects of the products we develop and deliver for things to start changing for the better. A holistic approach to software development may slow things down, but it will bring down the risk of future collapses for all of us.

Security prevents innovation

Another aspect of the same “faster and fancier now!” game that we encounter regularly is the “Security should not prevent innovation!” slogan. Says who? Not that I am against innovation, but security must sometimes prevent certain innovations, like tweaking a cryptographic algorithm for performance in a way that breaks its security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.
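
To make this concrete, here is a minimal sketch (in Python, with made-up iteration counts) of the kind of “performance innovation” that quietly destroys a security property: lowering a password hash’s work factor so that logins feel faster also makes offline cracking of a stolen password database orders of magnitude cheaper.

```python
import hashlib, os

def hash_password(password: str, iterations: int) -> bytes:
    """Salted PBKDF2-HMAC-SHA256; the iteration count is the security parameter."""
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

slow_but_strong = hash_password("correct horse", iterations=600_000)  # deliberate slowness is the point
fast_but_weak   = hash_password("correct horse", iterations=1_000)    # the "optimized" version

# Both calls succeed and both hashes "work" for login checks; the 600x difference
# in attacker effort is invisible in any functional test or benchmark demo.
```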

Innovation has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We have taken it too far; we pray to innovation even in places where it does not matter or is outright harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development resulting not only in low security but low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering structures now, without any regard for proper planning and execution? Well, examples are easy to find, and the results are disastrous.


What makes the big difference? We can notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or will we wait for things to collapse first?

Uncertainty principle

An interesting concept surfaced not so long ago as an excuse for not doing anything: the “uncertainty principle of new technology”…

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, and even call the “Collingridge Dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? Now, this statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. It is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture, and it will be very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances, namely, when scientists and engineers develop a new technology looking only at the internal structure of the technology itself, without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one observes that parts can be exchanged, modified and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. Are they useful to society, the world and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is shaped by the underlying quality of life that guides it and directs its progress. Society influences what gets invented, whether we like it or not. We must not take for granted, though, that we are always going the right way. Sometimes scientists should stand up for fundamental principles of quality over the quantity of inventions and fight for the technology that would in turn steer society towards a better and more harmonious life.

Should technology be developed with the utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and this academic dilemma would not exist. Everything that is built with quality first remains that way and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by the current academia and industry. We can re-base society on quality, first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, this rationality that now guides everything, even to the detriment of life itself, and concentrate on quality instead.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush to deliver, but we would do better to rediscover them than to paper over their absence with pointless and harmful excuses.


NASA Apollo Mission

Think of the Apollo missions that brought people to the Moon. Would you not say that was a great achievement, not only for the engineers but for the whole world? The Apollo program encompassed many different areas, from metallurgy to psychology, to make space travel possible.

Apollo ships also had software. The software was complex and had many parts. The spaceship contained a lot of sensors, equipment and machinery that were controlled by software: command and data handling, telecommunications, electrical power system control, propulsion control, guidance and navigation, spacecraft integrity control, thermal control and so on. The spaceship was an incredibly complex system that had to operate under a wide variety of harsh and extreme conditions: vibration stress and acceleration, radiation and cosmic rays, meteoroids and extreme temperatures. And do not forget that the system also had to be fool-proof. As one of the people working on Apollo put it, “there is always some fool that switches the contacts polarity.”

And this complex system that had to operate under tremendous stress actually worked. Apollo not only went to the Moon but returned safely back to Earth. Is this not an example of great engineering? Is this not an example of a great achievement of humankind?

The software for the mission was developed by MIT engineers under NASA project management and with software development process experts from IBM. However, the success of the software development for the Apollo missions cannot be attributed to the software process guidance from IBM or to the project management of NASA. Those all failed miserably. They tried to divide the system up into components and develop the software to the best standards… and it did not work. MIT was lucky, in a sense, that the launch was delayed due to hardware problems; otherwise NASA would have had to cancel it because of the software problems.

it gets difficult to assign out little task groups to program part of the computer; you have to do it with a very technical team that understands all the interactions on all these things.
— D. G. Hoag interview, MIT, Cambridge, MA, by Ivan Ertel, April 29, 1966

The software was only developed because the MIT engineers got together and did it as a single system.

In the end NASA and MIT produced quality software, primarily because of the small-group nature of development at MIT and the overall dedication shown by nearly everyone associated with the Apollo program.
— Frank Hughes interview, Johnson Space Center, Houston, TX, June 2, 1983

The software for the Apollo program kept failing as long as they tried to isolate the systems and components from each other. Once the engineers formed a “small group”, that is, once they got together and worked on the software as a whole system with close dependencies and full understanding, they were successful. It is not so much that they refused the oversight and process expertise, but that they took a systemic, holistic view of the whole thing, and they all understood what they were doing and why. Some of the corners they had to cut caused malfunctions in flight, but the pilots were prepared for those: they knew they could happen, and those faults did not abort the missions.

Software is deadly

As society progresses, it writes more and more software and creates more and more automation. We are already surrounded by software, by devices running software of one kind or another, at all times. Somehow, we still think it is all right not to care what kind of software we put out. I think it is time to understand that everything we make will end up impacting us directly in our lives. Everything is controlled by software: TVs, airplanes, cars, factories, power plants. The consequences of errors will be felt by everyone, by all of us, in many cases literally on our own skin. Current methods of software development cause mass malfunction.

We screw up – people die. Some examples:

  1. Therac-25: a state-of-the-art linear accelerator for radiation therapy. Between 1985 and 1987 it delivered massive radiation overdoses, lethal for three patients, because of a race condition in its setup routine.
  2. Ariane 5, destroyed in 1996: a 64-bit floating-point velocity value in the guidance unit was converted to a 16-bit signed integer and overflowed (see the sketch after this list). The failure destroyed four scientific satellites; cost: $500 million.
  3. Nuclear holocaust was avoided twice only at the last moment because a human operator intervened, recognized the automated warning as a false positive and prevented a retaliatory strike: the NORAD nuclear missile false alarm of June 1980 and the Soviet nuclear missile false alarm of September 1983.
  4. March 2014: Nissan recalls 990,000 cars because a software problem in the occupant classification system might fail to detect an occupant in the passenger seat and suppress airbag deployment.
  5. July 2014: Honda concedes that a software glitch in electronic control units could cause cars to accelerate suddenly, forcing drivers to scramble to take emergency measures to prevent an accident. Honda Motor Co., citing the software problem, recalls 175,000 hybrid vehicles.
  6. April 2015: the U.S. GAO publishes the report “Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen”, stating that the flight control computers on board contemporary aircraft could be susceptible to break-in and takeover via the on-board WiFi network or even from the ground.
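
The Ariane 5 entry above is easy to reproduce in miniature. The sketch below is not the actual Ada flight code, just a Python illustration of the failure mode: a 64-bit floating-point value converted to a 16-bit signed integer with no handler for the out-of-range case, and with an illustrative, not historical, input value.

```python
import struct

def to_int16(horizontal_velocity_bias: float) -> int:
    # ">h" is a big-endian 16-bit signed integer; values outside -32768..32767
    # raise an error here, much as the unprotected conversion raised an
    # unhandled Operand Error in the Ariane 5 inertial reference software.
    return struct.unpack(">h", struct.pack(">h", int(horizontal_velocity_bias)))[0]

print(to_int16(20_000.0))  # fine for Ariane 4-class trajectories
print(to_int16(40_000.0))  # struct.error: the larger value the faster rocket produced
```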

Complex relationships in the real world – everything depends on everything – make the situation more and more dangerous. The questions of technology and ethics cannot be separated; developers must feel responsibility for what they create.

Specialization or Mastership

There is a tale of six blind men who were asked to determine what an elephant looked like by feeling different parts of the elephant’s body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

[Image: Blind monks examining an elephant]

All of them are right. The reason every one of them describes it differently is that each one touched a different part of the elephant.

Our narrowly specialized view of the world is very similar to that of the blind men feeling an elephant. I see the security part of the elephant, developers see the functionality part, and none of us sees the whole elephant. As a result, improvement is judged on a fragment of the system, not on the system as a whole. We think that if we make a larger front right leg, we will get a better elephant. Or maybe it’s a longer trunk that’s important. In reality, we get an ugly and malfunctioning elephant. We get a failure. Develop a function and take no account of security: failure. Develop a security feature and take no account of usability: failure. Since nobody has the holistic view, any approach that makes the elephant bigger on one side or another fails. We fail on all fronts.

The absence of a holistic approach, one that unites all aspects of the system and sees it in action, results in complete and unavoidable failure.

This failure causes losses, both direct financial losses and indirect losses through wasted resources. We need to slow down and get an overview of what we are doing, each of us. We need to get some understanding of the whole. We need to figure out how everything works together. This is not a problem of documentation, or communication, or proper processes. It is the deeper problem of understanding our creations and of thinking about the world. We need to be able to put it all together in our heads in order to work out how to make the elephant better.

The proponents of agile methods took a step in the right direction by saying that specialization is not necessary, or is even harmful, for software development. Unfortunately, they then took twenty steps backwards by saying that developers only need to understand the small chunk of code they are working on and nothing else.

Security is integral to development

If you look at current software development, you will notice that security is always built around the product like a fence, after the fact. First you get the function, and then you get a fence of security around it. The situation is exactly the same with other aspects, like quality and usability. As a result, you get a bit of more or less coherent code with a bunch of fences around it, and you are lucky if the fences even end up being concentric. The developers of the different aspects of the system tend to have completely different ideas about what the system does and about its intended environment.

That is not a good way of dealing with product development. We need product development that unites all aspects of the product and includes its interaction with the world. We need developers that understand the product’s function and can deal with the multitude of aspects of the product and its lifecycle. Developers and managers must understand that security is an integral part of the product and deal with it responsibly.

I notice that when I talk about the security program I created at Software AG, I invariably get the same reaction: what our security team is doing is very advanced and simply amazing. Well, to me it is not. The difference is that most companies go after security piecemeal and after the fact, while we applied the holistic approach and introduced security into all areas of product development. Other companies simply perform some penetration testing, fix the bugs and leave it at that. We go after the development process, company policies, developer training and so on, taking the view that everything we do contributes to the security or insecurity of our products. That creates a very impressive impression of quality in what we do, even though it is perfectly normal to do and to expect.

Let’s start small. I want you to look back at ancient Greek philosophy and understand the meaning of taking a holistic approach to everything you do. We need that principle, we need the holistic approach in other areas of our lives too, but we need it badly now in software engineering. We need the excellence, the harmony, and the overview. Next time you do your job, try considering and following a more holistic, systemic approach.

The holistic approach will allow you to make sure that whatever you do is actually correct, secure, of high quality, works as expected and is simply right for the customer. It will allow you to manage changes and innovation, external influences and impulses, while understanding what should be used and what should be ignored. The holistic approach will also mean that you deliver long-term value and improvements, finally making that better elephant the customers have been waiting for.

 

Comments List

Dan – 2015-04-22 17:04

Thank you!

Sven Türpe – 2015-06-07 17:22

The following rant may interest you: https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/

Albert Zenkoff – 2015-06-16 16:31

Fantastic. A great summary of what a scam the whole Agile and Scrum thing is. I have an article on the subject waiting to be finished; I will add to the points already made that Agile is, I think, a deliberate scam by the shadier part of the industry to get cheap engineering en masse.

Sven Türpe – 2015-06-25 18:07

Another interesting & related post (in German): http://blog.fefe.de/?ts=ab72d7ef
