On the utility of technical security

It is often said that a system is only as strong as its weakest link. Once you have good security and strong passwords, the weakest link is the human, as it has always been. Think about how the system can be recovered from a breach when the problem is not technical but human.

[youtube=http://youtu.be/W50L4UPfWsg]

Security by …

We know several common buzzwords used to describe the security strategy of a company (or an individual). Let’s try to define them once again, for completeness’ sake.

  • Security by ignorance
    Easily summed up as “what you do not know cannot hurt you”, and obviously wrong. This typically happens at the early stages of a software developer’s career, when a person is uneducated about security and simply does not know any better. Blissful ignorance usually ends rather abruptly, with the cold shower of security education or a security breach.
  • Security by obscurity
    The typical position of most software companies: hiding the secrets somewhere they themselves would not think to look, oblivious to the fact that thieves know very well where you stash your money and jewelry. This is an actively harmful position asking for trouble, and the trouble usually does not take long to appear. In companies, this is often the end result of the near-sightedness of management, worried only about their quarterly bonus.
  • Security by completeness
    The typical “very advanced security” position of many companies. This approach actually works quite well, but only thanks to the fact that there are more companies in the above two categories. Completeness means the company extends its quality assurance with security-relevant testing, design and code reviews, vulnerability testing and the like. In the end, one has to remember that correctness is not the same as security and cannot guarantee it. When implemented correctly, this approach can provide a really potent false feeling of security and serve as a shield against charges of incompetence and negligence.
  • Security by isolation
    An approach touted by many security and non-security companies as the ultimate solution to the security problems of today. The idea is that you run your application in an isolated environment and throw away the application together with the environment afterwards, or whenever you suspect a security problem. This way, security breaches are contained to a small, disposable portion of software and do not cross over to the system at large. There are a few problems here, not the least of which is the nurtured feeling of complacency and false security. Breaches can escape from the isolated environment to the system at large; the data is never completely thrown away (why else would you compute that data in the first place?); and so on. This is a dead end of false security.
  • Security by design
    This is the concept most unfamiliar to most people. Typically, this is the case where the system is specifically designed to be secure. The environment is not taken for granted, malicious abuse is assumed, and care is taken to minimize the impact of the inevitable security breaches. Since this takes a lot of careful planning, thinking ahead, designing and verification, these products are always late to the market and nearly never succeed. So we have no idea what it is like to use secure systems. Mainframes (that’s what “clouds” were called twenty years ago) were a bit like that, I feel…

So, what’s left then? Is there a practical approach to security that is not so expensive that companies balk at it, but still provides good, solid security?

Common passwords blacklist

Any system that implements password authentication must check that the passwords are not too common. Every system faces brute-force attacks that try one or another list of the most common passwords (and usually succeed, by the way). The system must be able to slow down an attacker by any means available: slowing down the system response every time an unsuccessful authentication is detected, blocking an account for a short time after a number of unsuccessful authentication attempts, or throwing up captchas.
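The throttling ideas above can be sketched in a few lines. This is a hypothetical in-memory version (the names and thresholds are my own, not from any particular system); a real deployment would persist the counters per account and combine lockouts with response delays and captchas:

```python
import time

MAX_ATTEMPTS = 5        # failed logins before a temporary lockout (assumed value)
LOCKOUT_SECONDS = 300   # how long the account stays blocked (assumed value)

failures = {}           # username -> (failure count, time of last failure)

def register_failure(user, now=None):
    """Record one unsuccessful authentication attempt for this user."""
    now = time.time() if now is None else now
    count, _ = failures.get(user, (0, 0.0))
    failures[user] = (count + 1, now)

def is_locked(user, now=None):
    """True while the user has exceeded MAX_ATTEMPTS within the lockout window."""
    now = time.time() if now is None else now
    count, last = failures.get(user, (0, 0.0))
    if count < MAX_ATTEMPTS:
        return False
    if now - last >= LOCKOUT_SECONDS:
        failures.pop(user, None)   # lockout expired: reset the counter
        return False
    return True
```

The time parameter is injectable only to make the logic easy to test; in production you would simply use the clock.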

However, even the most sophisticated system fails if the user’s password is the most common word: “password”. The attacker then simply succeeds at once, because that is likely to be the first word tried. So we need a system for blacklisting passwords that are most likely to be tried in a dictionary brute-force attack. This may be annoying for users who would prefer a simple word as a password, but this is the reality – any simple word used as a password is likely to be a security hole and must be banned.

While implementing the user login plugin for CakePHP, I came across this simple question: where do we get the password lists to check newly entered passwords against? Here is a resource I can recommend: 62K Common Passwords by InfoSec Daily. Depending on your system’s speed, you could use a smaller file of 6 MB, a 1.5 GB file that should take care of most common passwords, or fuse the files into your own list.
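As a sketch of how such a list could be used at password-change time (the one-password-per-line file format and the function names are my assumptions, not tied to any particular list or framework):

```python
def load_blacklist(path):
    """Load a one-password-per-line blacklist file into a set for O(1) lookup."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_acceptable(password, blacklist, min_length=8):
    """Reject passwords that are too short or appear on the blacklist."""
    if len(password) < min_length:
        return False
    # Compare case-insensitively: "Password" is as weak as "password".
    return password.lower() not in blacklist
```

For the multi-gigabyte lists, loading everything into memory stops being practical; a sorted file with binary search, a bloom filter, or a database table would be the usual alternatives.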

Hack NFC Door Locks

I can see in the logs that people sometimes come to this site with interesting searches. A recent one was “Hack NFC Door Locks”. Well, since there is interest in the subject, why not? Let’s talk about NFC, contactless smart card and RFID door locks, shall we?

Universal contactless smart card reader symbol

The actual technology used for the wireless door lock does not really matter all that much. Except, perhaps, when RFID is used, because it can only support the simplest read-only schemes. But all in due time. What really matters is how the wireless technology is used. As usual in security, the technology can be used to create a fairly secure system, or the very same technology can be used to create a system that looks like Swiss cheese to an attacker. The devil is in the details.

The simplest way to use any wireless technology to open a lock is to receive some kind of identifier from the device, compare it to a stored database, and take the security decision based on that. RFID tags, contactless smart cards, and NFC devices all send out a unique identifier when they connect to the reader. That identifier is used in many systems to actually take the security decision and, for example, open the door. The problem with this scheme is, of course, that there is no authentication at all. The identification phase is there, but the authentication phase is absent. This is quite similar to asking for a user name but not requiring a password.
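The UID-only scheme described above boils down to something like this hypothetical sketch (the UIDs and names are invented). Note that nothing in it is secret, so anyone who has overheard a valid UID can present it and get in:

```python
# Whitelist of card UIDs the lock accepts (hypothetical values).
AUTHORIZED_UIDS = {"04:a3:1f:22:9c", "04:77:03:b1:5e"}

def door_should_open(received_uid):
    """Identification only: whoever presents a known UID is let in.
    There is no challenge and no proof of possession, so a recorded
    UID replayed from a laptop works exactly as well as the real card."""
    return received_uid in AUTHORIZED_UIDS
```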

Sounds silly, but many systems actually work like that. They rely on the assumption that copying a smart card is hard. Copying a smart card is indeed reasonably hard compared to typing in a password (although it can be done fairly easily too), but that is not the point. An attacker does not need to copy the smart card. All that has to be done is to listen to the wireless communication with a good antenna from around the corner, record it, and play it back.

Another common alternative is to store a unique identifier in the smart card (or NFC device) and then request the card to send back that identifier. This makes for a longer conversation between the reader and the card than in the case above, and depending on the protocol the attacker may need to do more work. However, since the communication is not encrypted, it can be reverse-engineered easily, and the attacker can still listen to the conversation, record all the required pieces of data, and then talk to the reader using his computer and an antenna.

Contactless smart cards and NFC devices have the option of using authenticated and encrypted communication. That is what is typically used by transport fare and banking systems. Those are hard to break. I have spent countless hours analyzing and hardening such systems, and I know that if it is implemented correctly, an attacker is about as likely to break it as any other well-implemented system using strong encryption – not very.
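The difference real authentication makes can be shown with a minimal challenge-response sketch built on an HMAC. This is an illustration of the principle only – the function names are hypothetical, and actual cards use their own protocols and ciphers – but it shows why eavesdropping and replay stop working once a shared secret and a fresh challenge are involved:

```python
import hashlib
import hmac
import secrets

def card_respond(key, challenge):
    """The card proves knowledge of the shared key without ever sending it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(key, challenge, response):
    """The reader recomputes the expected response and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Each session uses a fresh random challenge, so a response recorded
# from one session is useless against the next one.
```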

Of course, there can still be mistakes in the protocol, leaks that allow an attacker to guess some properties of the secret material, special hardware attacks… But it is easier to attack banking terminals with skimming devices than smart cards with hardware attacks. And door locks are more likely to implement the simplest of schemes than anything else…

Quantitative analysis of faults shows that…

Not to worry, we are not going to get overly scientific here. I happened across this extremely interesting paper, “Quantitative analysis of faults and failures in a complex software system”, published by Norman Fenton and Niclas Ohlsson in the good old year 2000. The paper is very much worth a read, so if you have the patience, I recommend you read it and draw your own conclusions. For the impatient, I present my own conclusions from reading the paper.

The gentlemen have done a pretty interesting piece of research that coincides well with my own observations of software development in various companies and countries. They worked with the large code base of a large company to investigate a couple of pretty simple hypotheses that most people take for granted. The research is about software faults in general, but security faults are software faults too, so it is all relevant anyway.

First, they investigated the relationship between the number of faults in the modules of the software system and the size of those modules. It turns out that software faults are concentrated in a few modules, not scattered uniformly throughout the system as one might have expected. That fits very well with the idea that developers are of different quality and experience, and that modules written by different people will show different levels of code quality.

Then comes the finding that confirms my experience but contradicts what I quite often hear from managers and coders alike at all levels: the complexity of the code has no relation to the number of faults in a module. More complex (and larger) code does not automatically beget more faults. It again comes down to the people who wrote the code whether it is going to be of higher or lower quality.

And then we come to a very interesting finding. Apparently, there is strong evidence that (a) software written in similar environments will have similar quality, and (b) software quality does not improve with time. You see, the developers do not become better at it. If they sucked at the beginning, they still suck ten years later. If they were brilliant to start with, you get great code from day one. I am exaggerating, but that is basically how it works. Great stuff, right?

So, the summary of the story is that if you want good code – get good developers. There is simply no other way. Good developers will handle high complexity and keep up the good work; bad (and cheap) developers will not, and will not learn. And no amount of tooling will rectify that. End of story.

Cryptography: just do not!

Software developers regularly attempt to create new encryption and hashing algorithms, usually to speed things up. There is only one answer one can give in this respect:

What part of "NO" don't you understand?

Here is a short summary of reasons why you should never meddle in cryptography.

  1. Cryptography is mathematics, very advanced mathematics
  2. There are only a few good cryptographers and cryptanalysts and even they get it wrong most of the time
  3. If you are not one of them, never, ever, ever try to write your own cryptographic routines
  4. Cryptography is a very delicate matter, worse than bomb defusing
  5. Consequently, you must know that most of the usual “cryptographic” functions out there are not actually cryptographic
  6. Even when it is good, cryptography is too easy to abuse without knowing it
  7. Bad cryptography looks the same as good cryptography. You will not know whether cryptography is broken until it is too late

So, I hope you are sufficiently convinced not to create your own cryptographic algorithms and functions. But we still have to use cryptographic functions, and that is no picnic either. What can mere mortals do to keep themselves on the safe side?
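One practical answer: reuse well-reviewed primitives from a mainstream library and stick to their documented usage, rather than assembling anything yourself. As a minimal sketch, here is password hashing with PBKDF2 from Python’s standard library – the iteration count below is an assumption for illustration, and you should consult current guidance for the value to use:

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # illustrative work factor; check current recommendations

def hash_password(password):
    """Hash a password with a fresh random salt using stdlib PBKDF2-HMAC-SHA256."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Every line here leans on vetted library code: random salts from `secrets`, the key derivation from `hashlib`, and the comparison from `hmac` – nothing home-grown.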


Supply chain: Huawei and ZTE

United States House of Representatives Seal

The US House of Representatives published an interesting report about its concerns with Huawei and ZTE, the large Chinese telecom equipment providers. The report states openly that there are concerns that the equipment, parts and software may be manipulated by Chinese government agencies, or on their behalf, in order to conduct military, state and business intelligence. The investigation behind the report did not dispel those concerns but made them more founded, if anything. We have to keep in mind that this is a highly political issue, of course. But even then, citing such concerns underlines what we have been talking about for several years now: the supply chain is a really important part of your product’s security, and blindly outsourcing things anywhere is a security risk.

The Elderwood Report

Symantec reports very interesting findings in their report on the so-called “Elderwood Project”. A highly interesting paper that I can recommend as bedside reading. Here is a teaser:

In 2009, Google was attacked by a group using the Hydraq (Aurora) Trojan horse. Symantec has monitored this group’s activities for the last three years as they have consistently targeted a number of industries. Interesting highlights in their method of operations include: the use of seemingly an unlimited number of zero-day exploits, attacks on supply chain manufacturers who service the target organization, and a shift to “watering hole” attacks (compromising certain websites likely to be visited by the target organization). The targeted industry sectors include, but are not restricted to; defense, various defense supply chain manufacturers, human rights and non-governmental organizations (NGOs), and IT service providers.

SHA-3 is there!

NIST has announced the end of the Secure Hash Algorithm competition the day before yesterday, naming Keccak as the winner and making it the SHA-3 algorithm. The complete announcement from NIST is here.

One thing of note is that since the algorithm was developed by people from STMicroelectronics and NXP Semiconductors, it is heavily optimized for use in smart cards. According to the announcements, it is both compact and fast when implemented in hardware. Which makes it, once again, very well suited to some applications and difficult to use for others (like password hashing).

IEEE should be embarrassed

“The world’s largest professional association for the advancement of technology” has been thoroughly embarrassed in an incident where they left log files containing user names and passwords open for FTP access to anyone on the Net for more than a month, according to a DarkReading report. Or at least I think they should be embarrassed, although they do not seem to be.

The data of at least 100,000 members was exposed, and IEEE took care to close the access. However, the exposure of the log files is not what I think they should be embarrassed about. As things go, mistakes in configuration happen and files may become exposed. That’s just life.

What is really troublesome is that IEEE, the “world’s largest professional association for the advancement of technology” (according to themselves), logged the usernames together with the passwords in plaintext. We know that’s bad, and it has been known to be bad for at least a couple of decades. They are definitely at least a couple of decades behind on good security practices. I think that’s really embarrassing.
