Position yourself on the Security Maturity Grid

I wrote up the Security Maturity Grid the way quality management maturity is usually presented. The grid is a simple 5 × 6 matrix that maps the stages of maturity of a company’s security management against six security management categories (management understanding of security, problem handling, cost of security, etc.). The lowest stage of maturity is called ‘Uncertainty’ – the organisation is inexperienced, security management is a low priority and reactive, and so on. As security management matures it goes through the stages of ‘Awakening’, ‘Enlightenment’ and ‘Wisdom’, up to the highest level, ‘Certainty’. Each point on the grid – maturity stage versus category – has a brief description of how that combination appears in the company.
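For readers who like to see the shape of such a grid, here is a minimal sketch of it as a plain data structure in Python. The five stage names and the three named categories come from the description above; the remaining categories and all cell texts are illustrative placeholders, not the contents of the actual grid.

```python
# A sketch of the Security Maturity Grid as a simple lookup table.
# Stage names are from the post; only three of the six categories are named
# in the text, and the cell descriptions here are illustrative placeholders.

STAGES = ["Uncertainty", "Awakening", "Enlightenment", "Wisdom", "Certainty"]

CATEGORIES = [
    "Management understanding of security",
    "Problem handling",
    "Cost of security",
    # ...the remaining three categories, as listed on the Security Maturity Grid page
]

# One short description per (stage, category) cell of the 5 x 6 matrix.
GRID = {
    ("Uncertainty", "Problem handling"):
        "Security problems are fought as they occur; purely reactive.",
    ("Certainty", "Problem handling"):
        "Problems are anticipated and prevented by design.",
    # ...and so on for the other cells
}

def describe(stage: str, category: str) -> str:
    """Return how a given maturity stage shows up in a given category."""
    return GRID.get((stage, category), "(cell not filled in for this sketch)")

print(describe("Uncertainty", "Problem handling"))
```

Positioning yourself then amounts to reading down each category and picking the stage whose description fits your organisation best.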

I keep the grid on a separate page, Security Maturity Grid, so have a look and try to position yourself or your company on the grid. Then wait for the software security goons to show up :)


CAST Workshop “Secure Software Development”

We are organizing the workshop on “Secure Software Development” for the third year in a row. As usual, the workshop takes place in Darmstadt and the logistics are handled by CAST e.V. The date for the workshop is 12 November.

This year most presentations will be in German, so the workshop will probably not be of much use to non-German speakers. But if you speak German, we have some rather interesting subjects, such as our experiences with vulnerability management, research into the sociotechnical basis of development security, and the problems of securing mobile payment infrastructure.

The workshop is a great place for discussions and for meeting various people working on security in software development. Please come and join us on 12 November!

The human factor: philosophy and engineering

The ancient Greeks had a concept of “aretê” (/ˈærətiː/) that is usually loosely translated to English as “quality”, “excellence”, or “virtue”. It was all that and more: the term meant the ultimate and harmonious fulfillment of task, purpose, function, or even the whole life. Living up to this concept was the highest achievement one could attain in life. Unfortunately, it does not translate well into English where the necessary concept is absent.

To give an example of arete, one could consider a work of art, like a painting, or a book. We could argue infinitely about a work of art and its many aspects but the majority of people do not have any problem identifying whether a work of art is actually a masterpiece or a stupid piece of garbage. That “whatsit” that we identify in a masterpiece is the arete, the harmony of total and complete excellence of the author pouring his virtue into his work.


Unfortunately, the science of today’s world is not built on the same principles and, in fact, is drifting further and further away from them. It probably all started with Aristotle and his move away from the “essence of things” as the source of everything else. Aristotle taught us, essentially, that we can understand things by splitting and analyzing them, that the “essence” of a thing can be understood from the thing itself, that we can achieve anything through the “divide and conquer” principle. That is where our scientific methods originate. Aristotle, among other philosophers, gave us logic and the foundation for all other sciences.

The scientific methods of divide and conquer are great; they are, effectively, what built this civilization. But they have a downside. They are fine when there is a counterbalance to them, but when they are taken as the only possible view of the world, as the ultimate philosophy, they are pushed to extremes and start causing problems. It may seem far-fetched to draw a connection from ancient philosophy to contemporary engineering and security, but our engineering is based on how we think, so our products necessarily reflect our (apparent or obscured) philosophy.

What is the problem with the philosophical direction that started with Aristotle and that we have all effectively followed ever since? The problem is that it leaves no place for, and does not require, the arete, the harmonious excellence. The philosophy of today makes our thinking, and not only our scientific thinking, compartmentalized. We are now used to thinking about things as completely separate from each other. We are used to dividing the world up into small chunks and operating on those small chunks one at a time, arguing that this makes the whole thing more manageable. We treat the world as a puzzle, investigating and tweaking one piece at a time. But we forget about the relation of the chunk we are working on to the grand scheme of things. We forget about the influences and dependencies, both in space and time. We forget that the puzzle must click together in the end.

For example, when quality management is explained to you, you usually get an overview of the famous “five views of quality”. The “transcendental view” is the inherent quality obvious to the observer – “I know it when I see it”; the “product-based view” provides for designing a product against benchmarks for speed, mean failure rate and so on; the “user-based view” calls for satisfying consumer preferences; the “manufacturing-based view” requires conformance to specification; and the “value-based view” calls for design based on cost-benefit analysis. Out of all these, the only one the customer really sees and really cares about is the first: the arete, the “transcendental quality”. Guess which one is completely ignored in quality management? The very same one, for the simple reason that it cannot easily be broken up into pieces to be measured and “improved” on their own. The same problem permeates all of our engineering, and especially security.

Systemic approach

That means we tend to ignore one of the cornerstones of security: the systemic approach. We often come across the myth, declared even from security conference stages, that secure components will make a secure system. This assumption drives the creation of many a system that is then claimed, for this reason alone, to be secure. Well, what a surprise: it won’t, and they aren’t. This problem is well known in the security field, especially where seriously high levels of security are involved, such as the smart card business. When you take two secure components and combine them, you cannot make any statement about the security of the whole based solely on the security of each part. You must consider the whole thing before you can make any statements regarding the system.

Secure components are never secure unconditionally. They are secure conditionally: they are secure as long as a certain set of assumptions holds true. Once an assumption becomes invalid, the component is no longer secure. When we combine two secure components we create a problem of composition, where the components potentially interact in unforeseen ways. They may still be secure, or they may not. This is where a systemic, holistic view of the system must definitely take the upper hand.

Short time horizons

Another problem is the extreme shortening of time horizons. Have you noticed how everyone is interested only in immediate results, immediate profits, things we can deliver, sell, buy, have, wear, eat, and drink today? It is noticeable everywhere in society, but in the software industry it has become the defining aspect of life.

When whatever we are building exists in isolation, when we need not consider the effects on the industry, technology, society… we do not need to worry about long-term results of our work. We did this “thing” and we got our bonus, that’s the end of the story. But is it?

I am sure we have all come across problems that arise from someone not doing a proper job because they did not think it was worth the trouble: it was all done and gone. Yes, for whoever did it, it was done and gone, but for us, the people coming after, don’t we wish that someone had been more careful, more precise, more thoughtful? Don’t we wish he had spent just a little more time making sure everything works not just somehow, but properly?

I have a friend who works for a restaurant chain in St. Petersburg. That is a pretty large chain and there are many things to do, of course. One thing we once talked about was the food safety and health inspection. I was frankly surprised at how much effort goes into compliance with those rules. They actually do follow all of the food safety guidance and perform thorough audits and certification. When I asked her why they bother with this, my friend told me that they have two very serious reasons to do so, and both of them are long-term overall business risks. One, if someone were to get food poisoning, they would otherwise have no certifications and audit results to fall back on, and they would have a hard time in court proving that they actually exercised due diligence in all matters. Two, they would lose a lot of clientele if something like that ever happened, and in an established industry with a lot of competition that could well mean going out of business.

So you could call that risk management, due diligence, or simply a good understanding that in the long term business is not just about getting products out of the door as cheaply as possible, the understanding that there is more to doing good business than momentary advantages. My friend has a holistic view of the business that encompasses everything that is important for it, and that makes her and her business successful.

They could, like so many companies in our software field, take a short-term view and save some money, get something quick and dirty done, but they understand that this is not a sound long-term business strategy. In our field, security is getting worse and worse and somehow we still think it is okay to think entirely in the short term, to the next release, to the next milestone. What we need is a proper long-term consideration of all aspects of the products we develop and deliver for things to start changing for the better. A holistic approach to software development may slow things down, but it will bring down the risk of future collapses for all of us.

Security prevents innovation

Another aspect of the same “faster and fancier now!” game that we encounter regularly is the “Security should not prevent innovation!” slogan. Says who? Not that I am against innovation, but security must sometimes prevent certain innovation, like tweaking cryptographic algorithms for performance in a way that would break their security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.

“Innovation” has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We took it too far; we pray to innovation in places where it does not matter or is even harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development resulting not only in low security but low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering structures now, without any regard for proper planning and execution? Well, examples are easy to find and the results are disastrous.


What makes the big difference? We can notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or do we wait for things to collapse first?

Uncertainty principle

An interesting concept surfaced not so long ago as an excuse for not doing anything, called the “uncertainty principle of new technology”…

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, even calling it the “Collingridge Dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? Now, this statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand if it is true and we do not understand what the consequences are either way. By the time we understand whether it is true or false it will be deeply engraved in our development and security culture and it will be very hard to get rid of. So even if it was useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances, namely, when the scientists and engineers develop a new technology looking only at the internal structure of the technology itself without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia but it does not make it right.

When one looks only at the sum of parts and their structure within a system, one can observe that parts can be exchanged, modified and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. Are they useful to society, the world and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is guided and shaped by the underlying quality of life. Society influences what has to be invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes scientists should stand up for the fundamental principle of quality over quantity of inventions and fight for the technology that would in turn steer society towards a better and more harmonious life.

Should technology be developed with the utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and this academic dilemma would not exist. Everything that is built with quality first remains such forever and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by the current academia and industry. We can re-base society to take quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, the rationality that now guides everything even to the detriment of life itself, and concentrate on quality instead.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush of delivery but we would do better to re-discover it than to patch it with pointless and harmful excuses.


NASA Apollo Mission

Think of the Apollo mission that brought people to the Moon. Would you not say that that was a great achievement not only for the engineers but the whole world? The Apollo mission was a project that encompassed many different areas, from metallurgy to psychology, to make space travel possible.

Apollo ships also had software. The software was complex and had many parts. A spaceship contains a lot of sensors, equipment and machinery controlled by software. There is command and data handling, telecommunications, electrical power systems control, propulsion control, guidance and navigation, spacecraft integrity control, thermal control and so on. The spaceship is an incredibly complex system that operates under a wide variety of hard and extreme conditions. There are vibration stresses and accelerations, radiation and cosmic rays, meteoroids and extreme temperatures. And do not forget that the system must also be fool-proof. As one of the people working on Apollo put it, “there is always some fool that switches the contacts polarity.”

And this complex system that had to operate under tremendous stress actually worked. Apollo not only went to the Moon but returned safely back to Earth. Is this not an example of great engineering? Is this not an example of a great achievement of humankind?

The software for the mission was developed by the engineers of MIT under the project management of NASA and with software development process experts from IBM. However, the success of the software development for the Apollo mission cannot be attributed to the software process guidance from IBM or to the project management of NASA. They all failed miserably. They tried dividing the system up into components and developing the software to the best standards… and it did not work. MIT were lucky, in a sense, that the start was delayed due to hardware problems; otherwise NASA would have had to cancel it because of the software problems.

it gets difficult to assign out little task groups to program part of the computer; you have to do it with a very technical team that understands all the interactions on all these things.
— D. G. Hoag interview, MIT, Cambridge, MA, by Ivan Ertel, April 29, 1966

The software was only developed because the MIT engineers got together and did it as a single system.

In the end NASA and MIT produced quality software, primarily because of the small-group nature of development at MIT and the overall dedication shown by nearly everyone associated with the Apollo program.
— Frank Hughes interview, Johnson Space Center, Houston, TX, June 2, 1983

The software for the Apollo program was failing as long as they tried to isolate the systems and components from each other. Once the engineers used the “small group”, that is, once they got together and worked on it as a whole system with close dependencies and full understanding, they were successful. It is not so much that they refused the oversight and process expertise, but that they took a systemic, holistic view of the whole thing and they all understood what they were doing and why. Some of the corners they had to cut caused malfunctions in flight, but the pilots were prepared for those: they knew they could happen, and those faults did not abort the mission.

Software is deadly

As society progresses, it writes more and more software and creates more and more automation. We are already surrounded by software, by devices running software of one kind or another, at all times. Somehow we still think it is all right not to care what kind of software we put out. I think it is time to understand that everything we make will end up impacting us directly in our lives. Everything is controlled by software: TVs, airplanes, cars, factories, power plants. The consequences of errors will be felt by everyone, by all of us, in many cases literally on our own skin. Current methods of software development cause mass malfunction.

We screw up – people die. Some examples:

  1. Therac-25: a state-of-the-art linear accelerator for radiation treatment. In 1985 the equipment delivered lethal doses of radiation to three people due to a race condition in the setup.
  2. Ariane 5 rocket destroyed in 1996: the conversion of a velocity value in the guidance unit from 64 bits to 16 bits overflowed (a sketch of this failure mode follows the list). Destroyed 4 scientific satellites, cost: $500 million.
  3. Nuclear holocaust was avoided twice at the last moment because a human operator intervened, verified the automated results as a false positive and prevented a strike back. The dates are June 1980, the NORAD nuclear missile false alarm, and September 1983, the Soviet nuclear missile false alarm.
  4. March 2014: Nissan recalls 990,000 cars because a software problem in the occupant classification system might fail to detect an occupant in the passenger seat and therefore prevent airbag deployment.
  5. July 2014: Honda concedes that a software glitch in electronic control units could cause cars to accelerate suddenly, forcing drivers to scramble to take emergency measures to prevent an accident. Honda Motor Co., citing software problems, announced that it is recalling 175,000 hybrid vehicles.
  6. April 2015: the U.S. GAO publishes the report “Air Traffic Control: FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen”, stating that the flight control computers on board contemporary aircraft could be susceptible to break-in and takeover via the on-board WiFi network or even from the ground.
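To make the failure mode in item 2 concrete, here is a minimal sketch in Python of what happens when a 64-bit value is narrowed into a 16-bit signed field without a range check. This is only an illustration of the class of bug, not the flight code (which was written in Ada); the numbers are made up.

```python
import struct

def narrow_to_int16(value: float) -> int:
    """Squeeze a 64-bit float into a 16-bit signed integer field.

    struct raises an error when the value does not fit, analogous to the
    unhandled conversion error that shut down the Ariane 5 guidance unit.
    """
    return struct.unpack("<h", struct.pack("<h", int(value)))[0]

# A small value fits in 16 bits...
print(narrow_to_int16(12_345.0))        # -> 12345

# ...but a larger one, like the higher horizontal velocity of Ariane 5,
# does not fit and raises.
try:
    print(narrow_to_int16(98_765.0))
except struct.error as exc:
    # The flight software had no handler like this, so the error propagated
    # and the inertial reference units shut down.
    print("conversion failed:", exc)
```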

Complex relationships in the real world – everything depends on everything – make the situation more and more dangerous. The questions of technology and ethics cannot be separated; developers must feel responsibility for what they create.

Specialization or Mastership

There is a tale that six blind men were asked to determine what an elephant looked like by feeling different parts of the elephant’s body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.


All of them are right. The reason each one describes it differently is that each touched a different part of the elephant.

Our narrowly specialized view of the world is very similar to that of the blind men feeling an elephant. I see the security part of the elephant, developers see the functionality part of the elephant, and none of us sees the whole elephant. As a result, improvement is judged on a fragment of the system, not on the system as a whole. We think that if we make a larger front right leg, we will get a better elephant. Or maybe it is a longer trunk that is important. In reality, we get an ugly and malfunctioning elephant. We get a failure. Develop a function, take no account of security – get a failure. Develop a security feature, take no account of usability – get a failure. Since nobody has the holistic view, any approach to making the elephant bigger on one side or another fails. We fail on all fronts.

The absence of a holistic approach that unites all aspects of the system and sees it in action results in complete and unavoidable failure.

This failure causes losses, both direct financial ones and indirect ones through the waste of resources. We need to slow down and get an overview of what we are doing, each of us. We need to get some understanding of the whole. We need to figure out how everything works together. This is not a problem of documentation, or communication, or proper processes. This is the deeper problem of understanding our creations and thinking about the world. We need to be able to put it all together in our heads to be able to work out how to make the elephant better.

The proponents of agile methods took a step in the right direction by saying that specialization is unnecessary or even harmful for software development. Unfortunately, they then took twenty steps back by saying that developers only need to understand the small chunk of code they are working on and nothing else.

Security is integral to development

If you look at current software development, you will notice that security is always built around the product like a fence, after the fact. First you get the function and then you get a fence of security around it. The situation is exactly the same with other things, like quality and usability. As a result, you get a bit of more or less coherent code with a bunch of fences around it. You are lucky if the fences even end up being concentric. The developers of different aspects of the system tend to have completely different ideas about what the system does and about its intended environment.

That is not a good way of dealing with product development. We need product development that unites all aspects of the product and includes its interaction with the world. We need developers that understand the product’s function and can deal with the multitude of aspects of the product and its lifecycle. Developers and managers must understand that security is an integral part of the product and deal with it responsibly.

I notice that when I talk about the security program I created at Software AG, I invariably get the same reaction: what our security team is doing is very advanced and simply amazing. Well, for me it is not. The difference is that most companies go after security piecemeal and after the fact, while we apply a holistic approach and introduce security into all areas of product development. Other companies simply perform some penetration testing, fix the bugs and leave it at that. We go after the development process, company policies, developer training and so on, taking the view that everything we do contributes to the security or insecurity of our products. That gives what we do a very impressive air of quality, even though it is perfectly normal to do and to expect.

Let’s start small. I want you to look back at ancient Greek philosophy and understand the meaning of taking a holistic approach to everything you do. We need that principle, we need the holistic approach in other areas of our lives too, but we need it badly now in software engineering. We need the excellence, the harmony, and the overview. Next time you do your job, try considering and following a more holistic, systemic approach.

The holistic approach will allow you to make sure that whatever you do is actually correct, secure, of high quality, works as expected and is simply right for the customer. It will allow you to control changes and innovation, external influences and impulses, while understanding what should be used and what should be ignored. The holistic approach will also mean that you deliver long-term value and improvements, finally making that better elephant the customers have been waiting for.

 

Workshop on Agile Development of Secure Software (ASSD’15)

Call for Papers:

First International Workshop on Agile Development of Secure Software (ASSD’15)

in conjunction with the 10th International Conference on Availability, Reliability and Security (ARES’15), August 24-28, 2015, Université Paul Sabatier, Toulouse, France

Submission Deadline: April 15, 2015

Workshop website:

http://www.ares-conference.eu/conference/workshops/assd-2015/

Scope

Most organizations use agile software development methods, such as Scrum and XP, for developing their software. Unfortunately, agile methods are not well suited to the development of secure systems: they allow requirements to change, prefer frequent deliveries, use lightweight documentation, and their practices do not include security engineering activities. These characteristics limit their use for developing secure software. For instance, they do not consider conflicting security requirements that emerge in different iterations.

The goal of the workshop is to bring together security and software development researchers to share their findings, experiences, and positions about developing secure software using agile methods. The workshop aims to encourage the use of scientific methods to investigate the challenges related to the use of the agile approach to develop secure software. It also aims to increase communication between security researchers and software development researchers to enable the development of techniques and best practices for developing secure software using agile methods.

 Topics of interest

The list of topics that are relevant to the ASSD workshop includes the following, but is not limited to:

  • Challenges for agile development of secure software
  • Processes for agile development of secure software
  • Incremental development of cyber-physical systems
  • Secure software development training and education
  • Tools supporting incremental secure software development
  • Usability of agile secure software development
  • Security awareness for software developers
  • Security metrics for agile development
  • Security and robustness testing in agile development

 Important dates

Submission Deadline: April 15, 2015

Author Notification: May 11, 2015

Proceedings version: June 8, 2015

Conference: August 24-28, 2015

Sony 2014 network breach: the most interesting question remains unanswered

The November 2014 security breach at Sony Corporation remained the subject of conversation through the end of the year. Many interesting details have become known, while even more remain hidden. Most claims and discussions only serve to create noise and diversion, though.

Take the recent discussion of antivirus software, for example. Sony Corporation uses antivirus software internally: Norton, Trend Micro or McAfee, depending on the model and country (Sony uses Vaio machines internally). So I would not put much stock in the claims of any of the competitors in the antivirus software market that their software would have stopped the attackers. And it is irrelevant anyway. The breach was so widespread and the attackers had such total control that no single tool would have been enough.

The most interesting question remains unanswered, though. Why did the attackers decide to reveal themselves? They were in the Sony networks for a long time and extracted terabytes of information. What made them go for a wipeout and publicity?

Was publicity a part of a planned operation? Were the attackers detected? Were they accidentally locked out of some systems?

What happened is a very important question because in the former case the publicity is a part of the attack and the whole thing is much bigger than just a network break-in. In the latter cases Sony is lucky and it was then indeed “just” a security problem and an opportunistic break-in.

Any security specialist should be interested in knowing that bigger picture. Sony should be interested most of all, of course. For them, it is a matter of survival. Given their miserable track record in security, I doubt they are able to answer this question internally, though. So it is up to the security community, whether represented by specialist companies or by researchers online, to answer this most important question. If they can.


ENISA published new guidelines on cryptography

The European Union Agency for Network and Information Security (ENISA) has published the 2014 edition of its cryptographic guidelines, “Algorithms, key size and parameters”, as an update to the 2013 report. This year, the report has been extended to include a section on hardware and software side-channels, random number generation, and key life cycle management. The part of the previous report concerning protocols has been extended and converted into a separate report, “Study on cryptographic protocols”.

Together the reports provide a wealth of information and clear recommendations for any organization that uses cryptography. Plenty of references are provided, and the documents are a great starting point for both design and analysis.

Three roads to product security

I mentioned previously that there are three ways to secure a product from the point of view of a product manufacturing company. Here is a slightly more detailed explanation. This is my personal approach to classifying product security and you do not have to stick to it, but I find it useful when creating or upgrading a company’s security. I call these broad categories the “certification”, “product security” and “process security” approaches. Bear in mind that my definition of security is also much broader than the conventional one.

The first approach is the simplest: you outsource your product security to another company. That external company, usually a security laboratory, will check your product’s security, covering as many aspects as necessary for a set target level of security assurance, and will vouch for your product to your clients. This does not have to be as complicated and formal as the famous Common Criteria certification. The certification may be completely informal, but it will provide a level of security assurance to your clients based on the following parameters: how far the customers trust the lab, what target security level was set for the audit, and how well the product fared. Some financial institutions will easily recognize this scheme because they often use a trusted security consultancy to look into the security of products supplied to them.

Now, this approach is fine and it allows you to keep security outside, with the specialists. There are, of course, a few problems with it too. The main ones are that it may be very costly, especially when trying to scale up, and that it usually does not improve security inside the company that makes the product.

So if the company wants to build security awareness and plans to provide more than a single secure product, it is better to choose a more in-house security approach. Again, the actual expertise may come from outside, but in the following two approaches the company actually changes internally to provide a higher degree of security awareness.

One way is to use what I call “product security”. This is when you take a product and try to make it as secure as required without actually looking at the rest of the company. You only change those parts of the production process that directly impact security and leave everything else alone. This approach is very well described by the Common Criteria standard. We usually use the Common Criteria for security evaluations and certifications, but this is not required. You may simply use the standard as a guideline for your own implementation of security in your products, according to your own ideas of the level of security you wish to achieve. Common Criteria is an excellent guide that builds on the experience of many security professionals and can safely be called the only definitive guide to product security in the world today.

Anyway, with the “product security” approach you will only be changing things that relate directly to the product you are trying to secure. That means there will be little to no impact on the security of other products, but you will have one secure product in the end. Should you wish to make a second secure product, you will have to apply the same approach all over again.

Now, of course, if you want to make all your products secure, it makes sense to apply something else, what I call “process security”. You set up a security program that makes sure certain processes are correctly executed, certain checks are performed and certain rules are respected, and all of that together gives you an increase in the security of all of your products across the company. This is an orthogonal approach: you will not necessarily reach the required level of security very fast, but you will be improving the security of everything gradually and equally.

This “process security” approach is well defined in the OpenSAMM methodology, which can be used as a basis for implementing security inside the company. Again, OpenSAMM can be used for audits and certifications, but you may also use it as a guide for your own implementation. Take the parts you think you need and adapt them to your own situation.

“Process security” takes the broad approach and increases security gradually across the board, while “product security” quickly delivers a single secure product, with improvements to other products being incidental. A mix of the two is also possible, depending on priorities.


Secure the future – have a change of mind!

The future of the enterprise can be secured provided that it is properly organized and operated with a full understanding of its economics. The current concentration on “profit here and now” is extremely harmful to the survival of the world economy as a whole and of every given enterprise in particular.

Why is that? There are two parts to the problem. The first part has to do with the short-sightedness of typical company management, and the second with the isolation of company parts from each other and the requirement that everything bring profit by itself. Under these conditions security becomes an unwanted “fifth leg” that brings nothing but unjustifiable costs to the company. I tried to find a solution within this extremely limited view and there simply is none. However, the situation looks completely different if you take a long-term, systemic view of the enterprise.

In the long term, we absolutely need security, just as we need quality and many other things besides money, to ensure that the enterprise survives. Once we understand that, we will realize that we already have the knowledge, technology and tools to actually secure our products, and we can apply research where we find them lacking.

To illustrate, let’s look at how the simple economic model of the well-known game “Civilization” operates.

“Civilization” is a strategy game with a simplified economic model of cities, countries and the world. In this highly simplified model of an economy, describing the behavior of an entire civilization, the parameter “money” is not the only one that leads to success; rather, it is used to serve other areas of society. For example, when you build a library you effectively go to the cashier and convert money into scientific knowledge. The theater, likewise, is not built for profit but to spend money on culture. Almost all of the buildings whose direct purpose is not to rake in money represent a direct loss: football stadiums, churches, and tank factories just consume money rather than making a profit, but in return they produce something else: contentment for the people, culture, or tanks.


In principle, you can try to concentrate everything in the world on getting more money, but experienced players will tell you that this option is only meaningful in the final spurt, when there is a race to win, when you are effectively in a military situation of “it’s either us or them.” At other times you cannot ignore any section of public life: you have to make sure that culture is taken care of, that science is at a level not far behind (so that foreign tanks don’t overwhelm your chariots), that your production facilities allow you to produce anything you might need, and that the cash account allows you to support the whole caboodle.

Once again, it is important to note that most of the objects in Civilization are obviously unprofitable, and that is fine: they provide non-monetary income and in most cases they determine the success or downfall of the player. You build a theater, a library or a tank, pay for them, and do not complain that they need money. Money is produced by special objects that replenish the treasury. They are important, of course, as an integral part of society, but their main role in the game is to support the work of the other objects: to let society work and to push forward progress and culture and carry the flag of the country. Only in a single case does it make sense to be “in the money”: when you want to win politically by buying votes from neutral city-states. In all other cases a large cash balance is, on the contrary, rather an indication that you are doing something wrong.


So why are we talking about this? Money in Civilization is a tool, and that is what it should be in real life, at least in theory. Therefore, if you have excess money, it is best to invest it immediately into something that moves forward some real aspect of life: culture pushes the boundaries of your country, science keeps discovering new things like electric cars, the cavalry and the navy bring the light of truth to the infidels. Since everything around is continually evolving, funds should be regularly put into circulation, not in the sense of “revolving” in the bank but through investments in the real sector, because the proverbial 100 coins in the ancient world are not the same as 100 coins in the era of feudalism, even in the absence of inflation. Simply saving money has no special point: it means that you could have invested it in some venture but did not; for example, you could have mounted an expedition to another continent, but instead you are wasting away over your gold. Yes, money can be useful for responding to a sudden change in the situation, but that usually does not come with great effectiveness; for example, you can immediately buy up a bunch of soldiers in the case of a Mongol invasion, but if you act wisely it is much more effective, including in monetary terms, to prepare them in advance, even though soldiers are all cost and no profit, yes.

In the real world, it is much more complicated. Yet somehow it turns out that in a simplified toy “father of the nation” world simulator the different effects of particular aspects of human activity are taken into account, while in our advanced and diverse modern society it all comes down to one parameter: money. Look at what is happening in the world or in your company: the talk is all about cutting this and that because of “inefficiency”.

Even purely totalitarian societies somehow manage to engage in culture, science and other things, and only in our purely “liberal” economy and culture do we force culture, science, and almost the military … to make money. But, after all, this is nonsense in terms of governance!

There seem to be two important aspects at work:

1) The atomization of society and the economy also applies to the enterprise. In a unified society and company, things can be divided into parts that “earn” money and parts that “spend” it, as was done in the traditional family: the husband works in the field, the wife at home on the farm, and that is fine. Under conditions of atomization, each part is forced to survive as best it can. Science and culture in society, and security and quality in the company, are forced to earn profits, losing their original essence. To survive, every single part is required to perform, basically, all the functions of the whole, without any regard to its original purpose. The security department now has to “sell” its services, engage in marketing campaigns and calculate its “efficiencies”.

2) An extremely short time horizon has become the norm. Where top management was supposed to keep a very long-term perspective and support the activities that would allow the company to exist in the distant future, we are now dealing with non-stop pressure to deliver everything today.

In general, reducing all aspects of life and work to profit in the monetary sense immediately leads to many “fun” consequences.

There are many aspects to our work as a software company producing and selling software products, but if we simplify the model we can say that a few factors are involved in the long-term survival and prosperity of the company. One of those factors is the features of the software. That is your “money production” part, the thing that gets the software sold and brings in the money. Too much concentration on this part is dangerous, however.

There are other important parts. We will leave many of them aside for simplicity and look at quality. Ensuring software quality is pure cost: it does not sell as such and it does not bring in money. Should we stop spending money on quality? You would be right to assume that we will not. But why? Because the quality of our product influences future sales; it is not here and now but in the future that we will see the indirect, often unquantifiable, benefits. Still, most of us understand that destroying product quality will lead to deteriorating sales and company image, declining revenue and the eventual collapse of the company. So somehow, over the years, we have realized that a completely non-profitable activity is necessary for the enterprise’s survival.

The same applies to security. Most companies ignore security nowadays. Security is nothing but cost, and it costs even more than quality. Security is even less visible and its impact lies even further in the future. Many managers are short-sighted and ignore security to concentrate on what brings money in today and tomorrow. But is that a good idea? Security is like your army in “Civilization”: it is pure cost and you may never actually use it directly, but it is a good idea to have it unless you want to see your cities overrun by the American war chariots. Security is a cost that an enterprise must take on to ensure its long-term survival. It is as necessary as other costly things: quality, specialist training, research and so on.

So when a company puts security in a position where the security department has to justify its existence by proving, numbers in hand, that it is somehow “profitable”, that is pure lunacy on the part of top management. This concentration on the “money aspect” may pay off in the short term but will lead to a crash in the long term. Balance is as essential to a healthy company as it is to an empire in the game of “Civilization”. One cannot ignore the money aspect and risk running out of money at an unfortunate moment. One cannot concentrate on money and ignore everything else either. We must accept that security is one of the realities of life and that it is necessary to have, because otherwise “their tanks will crush our chariots”.

I hope we are clear on that now.

You may only need a sword once but you must carry it every day.
– Japanese proverb


Cheap security in real life?

Security concerns are on the rise, and companies are beginning to worry about the software they use. I again received a question that bears answering for all the people and companies out there, because this is a situation that happens often nowadays. So here is my answer to the question, which can be formulated thus:

“We are making a software product and our customer became interested in security. They are asking questions and offer to audit our code. We never did anything specifically for security so we worry what they might find in our code. How can we convince the customer that our product security is ok?”

There are, basically, three approaches to demonstrating your product security if we take that question to mean “how can we make sure our software is secure?” Unfortunately, the question is not meant that way. The company producing the software is not interested in security, and the meaning of the question is rather “how can we make the customer get off our backs while we keep producing insecure software?”

That boils down to the switch from “security by ignorance” to “security by obscurity”, as I explained in one of my earlier posts titled “Security by …”. That is, of course, the cheapest possible solution in the short run. However, it does not eliminate the risk of the company suddenly going bankrupt due to a catastrophic security breach in one of its products. Sony Corporation lost $190 million during their PlayStation Network hiccup a few years back. Can your company survive that kind of sudden loss? Would it not be better to invest a few hundred thousand in product security to ensure the continuity of your business in the long run?

But nevertheless the question was asked and what is a company to do when it is not willing to invest in a security program? When a company insists on a quick and dirty fix?

The advice in this case is to go along with your customer’s wishes. They want to audit your code? Excellent, take the opportunity. A code audit, or rather the “white box penetration testing” they more likely mean in this case, is an expensive effort. At a bare minimum, you are looking at about forty man-hours of skilled labor, or fifty thousand euro or dollars in net expense. Are they willing to spend that kind of money on you? Great. Take them up on their offer.

Oh, of course, they will find all sorts of bugs, security holes and simply ugly code. Take it all in with thanks and fix it pronto. You will get a loyal customer and a reputation for, frankly, the work that you should have done from the beginning anyhow.

Now, the important thing is going to be the handling of those findings. Here many companies go wrong. It is not good enough for your business to just fix the reported problems. Studies show that the developers never learn. That means your next release will have all those problems in the code again.

To do things right, you must set a special task for your development or testing teams: they must find ways to discover those problems during the development cycle. They must be able to discover the types of problems that were reported to you before the customer can detect them. Then your code will improve and you will be able to lower the fixing effort next time.

And there you are, a quick and dirty fix, as promised. Just don’t fall for the fallacy of thinking that you have security now. You don’t. To get proper security into your products and life cycle will take a different order of effort.
