On the utility of technical security

It is often said that a system is only as strong as its weakest link. When you have good security and strong passwords, the weakest link is the human, as it has always been. Think of how a system can be recovered from a breach when the problem is not technical but human.

[youtube=http://youtu.be/W50L4UPfWsg]

Security by …

We know several common buzzwords for describing the security strategy of a company (or an individual). Let’s try to define them once again, for completeness’ sake.

  • Security by ignorance
    Easily summed up by “what you do not know cannot hurt you”, and obviously wrong. It typically occurs at the early stages of a software developer’s career, when a person is uneducated about security and simply does not know any better. Blissful ignorance usually ends rather abruptly, with the cold shower of security education or a security breach.
  • Security by obscurity
    The typical position of most software companies: hiding their secrets somewhere they themselves would not think to look, oblivious to the fact that thieves know very well where you stash your money and jewelry. This is an actively harmful position that invites trouble, and the trouble usually does not take long to appear. In companies, it is often the end result of the near-sightedness of management, worried only about their quarterly bonus.
  • Security by completeness
    The typical “very advanced security” position of many companies. This approach actually works quite well, but only because there are more companies in the two categories above. Completeness means the company extends its quality assurance with security-relevant testing, design and code reviews, vulnerability testing and the like. In the end, one has to remember that correctness is not the same as security and cannot guarantee it. When implemented correctly, this approach provides a rather potent false feeling of security and serves as a shield against charges of incompetence and negligence.
  • Security by isolation
    An approach touted by many security and non-security companies as the ultimate solution to the security problems of today. The idea is that you run your application in an isolated environment and throw the application away together with the environment afterwards, or whenever you suspect a security problem. This way, security breaches are contained to a small, disposable portion of software and do not cross over to the system at large. There are a few problems here, not the least of which is the nurtured feeling of complacency and false security. Breaches can cross from the isolated environment into the system at large, the data is never completely thrown away (for why would you compute that data in the first place?), and so on. This is a dead end of false security.
  • Security by design
    This is the concept most unfamiliar to most people. This is the case where the system is specifically designed to be secure: the environment is not taken for granted, malicious abuse is assumed, and care is taken to minimize the impact of the inevitable security breaches. Since this takes a lot of careful planning, thinking ahead, design and verification, such products are always late to the market and nearly never succeed. So we have no idea what it is like to use secure systems. Mainframes (that’s what “clouds” were called twenty years ago) were a bit like that, I feel…

So, what’s left then? Is there a practical approach to security that would not be so expensive that companies turn their noses up at it, but would still provide good, solid security?

Quantitative analysis of faults shows that…

Not to worry, we are not going to get overly scientific here. I happened across an extremely interesting paper called “Quantitative analysis of faults and failures in a complex software system”, published by Norman Fenton and Niclas Ohlsson in the good old year 2000. The paper is very much worth a read, so if you have the patience I recommend you read it and draw your own conclusions. For the impatient, I present the conclusions I drew from it.

The gentlemen have done a pretty interesting piece of research that coincides well with my own observations of software development in various companies and countries. They worked with the large code base of a large company to investigate a couple of fairly simple hypotheses that most people take for granted. The research is about software faults in general, but security faults are software faults too, so it is all relevant anyway.

First, they investigated the relationship between the number of faults in the modules of a software system and the size of those modules. It turns out that software faults are concentrated in a few modules, not scattered uniformly throughout the system as one might have expected. That coincides very well with the idea that developers are of different quality and experience, and that modules written by different people will show different levels of code quality.
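
To see what “concentrated in a few modules” means in practice, here is a toy sketch in Python. The module names and fault counts are entirely made up for illustration; the point is only the shape of the distribution, not the numbers.

```python
# Hypothetical fault counts per module (illustrative data only).
faults = {
    "parser": 48, "network": 31, "auth": 12, "ui": 4,
    "config": 3, "logging": 1, "utils": 1, "export": 0,
}

total = sum(faults.values())

# Rank modules by fault count, worst first.
ranked = sorted(faults.items(), key=lambda kv: kv[1], reverse=True)

# What share of all faults lives in the top 20% of modules?
top = ranked[: max(1, len(ranked) // 5)]
share = sum(n for _, n in top) / total
print(f"top {len(top)} of {len(faults)} modules hold {share:.0%} of faults")
```

With this toy data a single module out of eight holds nearly half of all faults, which is the kind of skew the paper reports.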

Then comes the finding that confirms my experience but contradicts what I quite often hear from managers and coders alike at all levels: the complexity of the code has no relation to the number of faults in a module. More complex (and larger) code does not automatically beget more faults. It again comes down to the people who wrote the code whether it will be higher or lower in quality.

And then we come to a very interesting finding. Apparently, there is strong evidence that (a) software written in similar environments will have similar quality, and (b) software quality does not improve over time. You see, developers do not become better at it. If they sucked at the beginning, they still suck ten years later. If they were brilliant to start with, you will get great code from day one. I am exaggerating, but that is basically how it works. Great stuff, right?

So, the summary of the story: if you want good code, get good developers. There is simply no other way. Good developers will handle high complexity and keep up the good work; bad (and cheap) developers will not, and will not learn. And no amount of tools will rectify that. End of story.

Supply chain: Huawei and ZTE

The US House of Representatives published an interesting report about its concerns with Huawei and ZTE, two large Chinese telecom equipment providers. The report states openly that there are concerns that equipment, parts and software may be manipulated by Chinese government agencies, or on their behalf, in order to conduct military, state and business intelligence. The investigation behind the report did not dispel those concerns; if anything, it made them better founded. We have to keep in mind that this is a highly political issue, of course. But even so, citing such concerns underlines what we have talked about for several years now: the supply chain is a really important part of your product’s security, and blindly outsourcing things anywhere is a security risk.

The Elderwood Report

Symantec reports very interesting findings in its report on the so-called “Elderwood Project”. A highly interesting paper that I can recommend as bedside reading. Here is a teaser:

In 2009, Google was attacked by a group using the Hydraq (Aurora) Trojan horse. Symantec has monitored this group’s activities for the last three years as they have consistently targeted a number of industries. Interesting highlights in their method of operations include: the use of a seemingly unlimited number of zero-day exploits, attacks on supply chain manufacturers who service the target organization, and a shift to “watering hole” attacks (compromising certain websites likely to be visited by the target organization). The targeted industry sectors include, but are not restricted to: defense, various defense supply chain manufacturers, human rights and non-governmental organizations (NGOs), and IT service providers.

IEEE should be embarrassed

“The world’s largest professional association for the advancement of technology” has been thoroughly embarrassed by an incident in which it left log files containing user names and passwords open for FTP access to everyone on the Net for more than a month, according to a DarkReading report. Or at least I think they should be embarrassed, although they do not seem to be.

The data of at least 100,000 members were exposed, and IEEE took care to close the access. However, the exposed log files are not what I think they should be embarrassed about. As things go, configuration mistakes happen and files may become exposed. That’s just life.

However, what is really troublesome is that IEEE, the “world’s largest professional association for the advancement of technology” (according to themselves), logged the usernames together with passwords in plaintext. I mean, we know that’s bad, and it has been bad for at least a couple of decades. That puts them at least a couple of decades behind on good security practices. I think that’s really embarrassing.
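
For the record, this is what “not plaintext” looks like: a minimal sketch of salted password hashing using Python’s standard library. The scrypt parameters below are illustrative, not a tuning recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Return salt + scrypt digest; store this, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse battery staple")
```

Even if a file full of such records leaks, the attacker still has to brute-force each password through a deliberately expensive function, instead of simply reading them off.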

More e-mail addresses stolen

According to an article in Digital Trends, Dropbox leaked an unknown number of passwords. The interesting part is that they claim an attacker had access to an employee’s account, where a list of e-mail addresses was found. This is not the first time Dropbox has made the news, and this time they promise tougher security measures.

Unfortunately, I do not think the tougher security measures they propose will alleviate the problem of employees keeping lists of accounts in their dropboxes.

Philosophy of door locks

When working on security, there is something extremely important to keep in mind at all times: we are not trying to make systems impenetrable. We are trying to make it really, really hard for the attacker, that’s all.

Security guards everywhere

If an attacker has physical access to your system, you have lost. All of your measures, passwords, firewalls, everything, are there to deter an attacker who attacks remotely. But the only thing that actually stands between your system and a determined attacker is your door lock. Never thought of that, did you? The security of your computer at home is only as good as your door lock.

Yes, there are smart cards, which are physically secure computers. But their application is limited, and most of the time we have to deal with systems that we protect in the “virtual world” while in the real world they are basically defenseless. So we make it harder for attackers with door locks, security guards and CCTV cameras.

Again, we are just making it harder, not impossible. Impossible would be impossible, not to mention prohibitively expensive. Given that an attack is always possible and there are many avenues of attack, the attacker will tend to choose the path that is most economical: the cheapest way to break into your system.

My task, as I see it, is to convince you to use such security measures that it becomes cheaper for the attacker to break into your house than to attack your computer through its software. Once we are at that point, you can start looking into the well-understood world of physical security, and my task is done. But we are far from there.

Why bother?

Hmm… Good question… Well, let’s get this straightened out before we jump into other interesting subjects. Every single website and application, every single computer system, gets broken into. For fun, for money, for fame, or by accident. This is just the way it is, and we have to accept it as the current reality. I may not like it, but who cares about that?

Whether you are a large corporation or a student writing your first website, your system will get broken into. If your system has been around for a while, it has already been broken into. My not-so-extremely-popular website has already been broken into three times (that I know of), and I am not ashamed to admit it. Denial is futile. Take it as inevitable.

There is even a line of thought nowadays among some security people that we should not concentrate so much on trying to protect things, since we cannot prevent break-ins anyway. They say we should concentrate on detecting break-ins and containing the damage. Ah, bollocks. We have to do both. Do not give up your defenses just because you know they will eventually be breached. But be prepared.

What I really want to say is that when you build a computer system, be it a website, a corporate network, a smart card or anything else, you have no choice. Thinking that security is somebody else’s problem is extremely common, second only to not thinking about security at all, and usually proves disastrous in the not-so-distant future. Don’t be like that. Come to the good side: protect your system, think about security long and hard, apply the Hash and the Crypto the Right Way™, and your system will run happily ever after (well, at least until the next major breakthrough in cryptography or something).
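
As one small, concrete instance of “the Crypto the Right Way”: never trust data you handed to the outside world (a cookie, a token) when it comes back; authenticate it with an HMAC instead. A minimal sketch with Python’s standard library; the key handling here is deliberately simplified, a real application would store and manage the key carefully.

```python
import hashlib
import hmac
import secrets

# Generated on the fly for illustration only; real keys live in a key store.
SECRET_KEY = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    """Compute an authentication tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"user=alice")
```

Tampering with the message (say, changing it to `user=mallory`) invalidates the tag, so the server can detect the forgery instead of blindly trusting what came back.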
