Security by …

We all know several common buzzwords describing the security strategy of a company (or an individual). Let’s try to define them once again, for completeness’ sake.

  • Security by ignorance
    Easily summed up as “what you do not know cannot hurt you”, which is obviously wrong. It typically happens in the early stages of a software developer’s career, when a person is uneducated about security and simply does not know any better. Blissful ignorance usually ends rather abruptly with the cold shower of security education or a security breach.
  • Security by obscurity
    The typical position of most software companies: hiding the secrets somewhere they themselves would not think to look, oblivious to the fact that thieves know very well where you stash your money and jewelry. This is an actively harmful position, asking for trouble that usually does not take long to arrive. In companies, it is often the end result of the near-sightedness of management worried only about their quarterly bonus.
  • Security by completeness
    The typical “very advanced security” position of many companies. This approach actually works quite well, but only because there are more companies in the two categories above. Completeness means the company extends its quality assurance with security-relevant testing, design and code reviews, vulnerability testing and the like. In the end, one has to remember that correctness is not the same as security and cannot guarantee it. When implemented correctly, this approach can provide a really potent false feeling of security and serve as a shield against charges of incompetence and negligence.
  • Security by isolation
    An approach touted by many companies, security-relevant and otherwise, as the ultimate solution to today’s security problems. The idea is that you run your application in an isolated environment and throw away the application together with the environment afterwards, or whenever you suspect a security problem (a minimal sketch follows this list). This way, security breaches are contained to a small, disposable portion of software and do not cross over to the system at large. There are a few problems here, not the least of which is the nurtured feeling of complacency and false security. Breaches can still cross from the isolated environment into the system at large, the data is never completely thrown away (why else would you compute it in the first place?), and so on. This is a dead end of false security.
  • Security by design
    This is the concept most people are least familiar with. Here the system is specifically designed to be secure: the environment is not taken for granted, malicious abuse is assumed, and care is taken to minimize the impact of the inevitable security breaches. Since this takes a lot of careful planning, thinking ahead, designing and verification, these products nearly always arrive too late to market and almost never succeed. So we have no idea what it is like to use secure systems. Mainframes (that’s what “clouds” were called twenty years ago) were a bit like that, I feel…
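To make the “security by isolation” item above concrete, here is a minimal sketch of what such a disposable sandbox often looks like in practice. It assumes Docker is available and uses a hypothetical image name, untrusted-tool:latest; treat it as an illustration of the idea, not a recommendation.

    # Rough sketch of "security by isolation": run an untrusted tool in a
    # throwaway container and discard the environment afterwards.
    # Assumes Docker is installed; "untrusted-tool:latest" is a hypothetical image.
    import subprocess

    def run_disposable(args):
        return subprocess.run(
            [
                "docker", "run",
                "--rm",               # the container is deleted as soon as it exits
                "--network", "none",  # no network access from inside the sandbox
                "--read-only",        # immutable root filesystem
                "untrusted-tool:latest",
                *args,
            ],
            capture_output=True,
            text=True,
            check=False,
        )

    result = run_disposable(["--scan", "suspicious-archive.zip"])
    # The catch: whatever the tool produces still crosses back into the host,
    # so the boundary is only as strong as what you do with this output.
    print(result.stdout)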

So, what’s left then? Is there a practical approach to security that is not so expensive that companies refuse to touch it, yet still provides good, solid security?

Quantitative analysis of faults shows that…

Not to worry, we are not going to get overly scientific here. I happened across an extremely interesting paper called “Quantitative analysis of faults and failures in a complex software system”, published by Norman Fenton and Niclas Ohlsson back in the good old year 2000. The paper is very much worth a read, so if you have the patience I recommend you read it and draw your own conclusions. For the impatient, I present the conclusions I drew from it.

The gentlemen did a pretty interesting piece of research that coincides well with my own observations of software development in various companies and countries. They worked with a large software base of a large company to investigate a couple of pretty simple hypotheses that most people take for granted. The research is about software faults in general, but security faults are software faults too, so it is all relevant anyway.

First, they investigated the relationship between the number of faults in the modules of a software system and the size of those modules. It turns out that software faults are concentrated in a few modules rather than scattered uniformly throughout the system, as one might have expected. That coincides very well with the idea that developers differ in quality and experience, and that modules written by different people will show different levels of code quality.

Then comes the finding that confirms my experience but contradicts what I quite often hear from managers and coders alike at all levels: the complexity of the code has no relation to the number of faults in a module. More complex (and larger) code does not automatically beget more faults. Once again, it comes down to the people who wrote the code whether it ends up higher or lower in quality.
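Neither finding is hard to check if you keep a per-module fault database. The sketch below uses invented module names and numbers purely for illustration; it measures how concentrated the faults are and how (weakly) module size correlates with fault count.

    # Hypothetical data: module sizes in lines of code and recorded fault counts.
    # The names and numbers are invented for illustration only.
    modules = {
        "parser.c":  (12_000, 48),
        "ui.c":      (30_000,  3),
        "network.c": ( 8_000, 37),
        "storage.c": (22_000,  5),
        "crypto.c":  ( 4_000,  2),
        "report.c":  (15_000,  4),
    }

    sizes  = [size  for size, _     in modules.values()]
    faults = [count for _,    count in modules.values()]

    # How concentrated are the faults? Share carried by the two worst modules.
    total = sum(faults)
    worst_two = sum(sorted(faults, reverse=True)[:2])
    print(f"Top 2 of {len(modules)} modules hold {worst_two / total:.0%} of the faults")

    # Does size predict faults? Pearson correlation, computed by hand.
    n = len(modules)
    mean_s, mean_f = sum(sizes) / n, sum(faults) / n
    cov   = sum((s - mean_s) * (f - mean_f) for s, f in zip(sizes, faults)) / n
    std_s = (sum((s - mean_s) ** 2 for s in sizes) / n) ** 0.5
    std_f = (sum((f - mean_f) ** 2 for f in faults) / n) ** 0.5
    print(f"Size vs. faults correlation: {cov / (std_s * std_f):.2f}")

On a real fault database, the same few lines turn both claims from anecdotes into something testable.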

And then we come to a very interesting investigation. Apparently, there is strong evidence that (a) software written in similar environments will have similar quality, and (b) software quality does not improve with time. You see, developers do not become better at it. If they sucked at the beginning, they still suck ten years later. If they were brilliant to start with, you get great code from day one. I am exaggerating, but that is basically how it works. Great stuff, right?

So, the summary of the story is: if you want good code, get good developers. There is simply no other way. Good developers will handle high complexity and keep up the good work; bad (and cheap) developers will not, and will not learn. And no amount of tools will rectify that. End of story.

Cryptography: just do not!

Software developers regularly attempt to create new encryption and hashing algorithms, usually to speed things up. There is only one answer one can give in this respect:

What part of "NO" don't you understand?

Here is a short summary of reasons why you should never meddle in cryptography.

  1. Cryptography is mathematics, very advanced mathematics
  2. There are only a few good cryptographers and cryptanalysts and even they get it wrong most of the time
  3. If you are not one of them, never, ever, ever try to write your own cryptographic routines
  4. Cryptography is a very delicate matter, worse than bomb defusing
  5. Consequently, you should assume that most of the usual “cryptographic” functions you encounter are not actually cryptographic
  6. Even when it is good, cryptography is too easy to abuse without knowing it
  7. Bad cryptography looks the same as good cryptography. You will not know whether cryptography is broken until it is too late

So, I hope you are sufficiently convinced not to create your own cryptographic algorithms and functions. But we still have to use cryptographic functions, and that is no picnic either. What can mere mortals do to keep themselves on the safe side?
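One reasonable answer, sketched here under the assumption that you work in Python, is to reach for a well-vetted library and its highest-level recipe instead of assembling primitives yourself; the Fernet recipe from the pyca/cryptography package is one example of such an opinionated interface.

    # Illustrative sketch: use a vetted, high-level recipe rather than inventing one.
    # Requires the third-party pyca/cryptography package (pip install cryptography).
    from cryptography.fernet import Fernet

    # Fernet bundles the cipher, authentication and IV handling behind one API,
    # leaving far less room for the subtle mistakes listed above.
    key = Fernet.generate_key()   # a secret: store it in a key vault, not in code
    f = Fernet(key)

    token = f.encrypt(b"quarterly payroll data")
    plaintext = f.decrypt(token)  # raises InvalidToken if the data was tampered with

    assert plaintext == b"quarterly payroll data"

Even then, point 6 above still applies: a good library badly used (keys checked into source control, tokens that never expire) is no better than home-grown cryptography.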

