A company with an SQL injection name

Finally, someone has registered a company whose name is an SQL injection attack. We have seen car license plates doctored to execute SQL injection, but this is the first time, I think, that someone has attempted to crash every business SQL database in a country.

The company name is: ; DROP TABLE "COMPANIES";-- LTD

The registration record: https://beta.companieshouse.gov.uk/company/10542519

XKCD cartoon “Exploits of a mom”
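For the curious, a minimal sketch of why such a name is dangerous: when an application glues the registered name into an SQL statement by string concatenation, the quote and comment characters in the name become part of the SQL itself. The class, method and table names below are hypothetical.

```java
// Sketch: why concatenating untrusted input into SQL is dangerous.
public class InjectionDemo {
    // Naive query building: the attacker's input becomes part of the SQL text.
    static String naiveQuery(String companyName) {
        return "INSERT INTO companies (name) VALUES ('" + companyName + "');";
    }

    public static void main(String[] args) {
        String name = "'; DROP TABLE \"COMPANIES\";-- LTD";
        // The resulting string now contains a second, destructive statement:
        // INSERT INTO companies (name) VALUES (''; DROP TABLE "COMPANIES";-- LTD');
        System.out.println(naiveQuery(name));
    }
}
```

With JDBC, the same insert done through a PreparedStatement with a `?` placeholder passes the name as data rather than as SQL text, so the embedded DROP TABLE never executes.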

Don’t patch it, it’s fine?

I wrote back in 2013, in my article Brainwashing in security, about my shock at discovering that companies are now publicly calling for a stop to investment in security and for leaving security bugs unfixed. There, we witnessed the head of Adobe security, Brad Arkin, tell us that companies should not be wasting their precious resources on “fixing every little bug”, agreeing with a comment made by another participant, John Viega of SilverSky, that:

“For most companies it’s going to be far cheaper and serve their customers a lot better if they don’t do anything [about security bugs] until something happens.”

All right, fast forward three years and Adobe has become a showcase. Here is what Google senior security engineer Darren Bilby, speaking at Kiwicon, has to tell us about the security of contemporary software:

“We are giving people systems that are not safe for the internet and we are blaming the user,” Bilby says.

He illustrated his point with the 314 remote code execution holes disclosed in Adobe Flash last year alone, saying that the strategy of patching those holes is like a car yard selling vehicles that catch fire every other week.

The security strategy at Adobe is clearly paying its dividends. Way to go, Adobe, way to go…


Data breach at LinkedIn

Apparently, there was a serious data breach at LinkedIn and many customer records were stolen, including “member email addresses, hashed passwords, and LinkedIn member IDs”. LinkedIn sent out a notification informing users that the passwords had been invalidated. What is interesting is the cryptic remark that the break-in was “not new”. What could they mean by that?

On May 17, 2016, we became aware that data stolen from LinkedIn in 2012 was being made available online. This was not a new security breach or hack. We took immediate steps to invalidate the passwords of all LinkedIn accounts that we believed might be at risk. These were accounts created prior to the 2012 breach that had not reset their passwords since that breach.

I can take a wild guess that the passwords from before 2012 were stored either unencrypted, without salt, or with some very weak algorithm. The security breach itself was, of course, “new”, but the only information at risk is those passwords in the database that were stored in the old-fashioned way.
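For reference, storing passwords properly means a slow hash with a random per-user salt, so that identical passwords produce different stored values and precomputed tables are useless. A minimal sketch using the standard JDK PBKDF2 implementation; the class name, iteration count and key length here are illustrative:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashing {
    // Hash a password with a per-user salt using PBKDF2 (standard JDK API).
    // 100,000 iterations and a 256-bit key are illustrative parameters.
    static String hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] dk = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(dk);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] saltA = new byte[16], saltB = new byte[16];
        new SecureRandom().nextBytes(saltA);
        new SecureRandom().nextBytes(saltB);
        // Same password, different salts: the stored hashes differ,
        // so a precomputed (rainbow) table is useless.
        System.out.println(hash("secret".toCharArray(), saltA));
        System.out.println(hash("secret".toCharArray(), saltB));
    }
}
```

Unsalted fast hashes such as plain SHA-1 – which is what LinkedIn was reported to have used – have neither property, which is exactly why an old dump stays dangerous for years.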

So, according to my wild guess, more information must have been stolen than they tell us, but LinkedIn judged that the only information threatening them was those old passwords, so they finally invalidated them (which they should have done back in 2012) and told us they are happy with it. Unfortunately, there is no way to know for sure.

You can make your own wild guess at what happened.

Position yourself on Security Maturity Grid

I wrote up the Security Maturity Grid the way quality management maturity is usually presented. The grid is a simple 5 x 6 matrix that shows the different stages of maturity of a company’s security management against six security management categories (management understanding of security, problem handling, cost of security, etc.). The lowest stage of maturity is called ‘Uncertainty’ – the organisation is inexperienced, security management is a low priority and reactive, and so on – then, as security management matures, it goes through the stages of ‘Awakening’, ‘Enlightenment’ and ‘Wisdom’ to the highest level, ‘Certainty’. Each point on the grid – maturity versus category – has a brief description of how that combination appears in the company.

I keep the grid on a separate page, Security Maturity Grid, so have a look and try to position yourself or your company on the grid. Then wait for the software security goons to show up :)


Worst languages for software security

I was sent an article about the programming languages that generate the most security bugs in software today. The article seemed to refer to a report by Veracode, a company I know well, to discuss what software security problems are out there in applications written in different languages. That is an excellent question and a very interesting subject for discussion. Except that the article really failed to discuss anything, making instead misleading and incoherent statements about both old-school languages like C/C++ and scripting languages like PHP. I fear we will have to look into this problem ourselves instead.

So, what languages are the worst when it comes to software security? Are they the old C and C++, as so many proponents of Java would like us to believe? Or are they the new, quickly developing languages with little enforcement of structure, like PHP? Let’s go to Veracode and get their report: “State of Software Security. Focus on Application Development. Supplement to Volume 6.”

The report includes a very informative diagram showing what percentage of applications pass the OWASP policy for a secure application out of the box, grouped by the language of the application. The OWASP policy is defined as “not containing any of the security problems mentioned on the OWASP Top 10 most important vulnerabilities for web applications”, and OWASP is the accepted industry authority on web application security. So if they say that something is a serious vulnerability, you can be sure it is. Let’s look at the diagram:

[Diagram: Veracode – percentage of applications passing OWASP policy, by language]

Fully 60% of applications written in C/C++ come without any of those most severe software security vulnerabilities listed by OWASP. That is a very good result and a notable achievement. Next, with pass rates one and a half to two times lower, come the three mobile platforms. The next actual programming language, .NET, comes out more than twice as bad, and Java is two and a half times as bad as C/C++. The scripting languages are three times as bad.

Think about it. Applications written in Java are almost three times as likely to contain security vulnerabilities as those written in C/C++. And C/C++ is the only language that gives you a better than 50% chance of not having serious security vulnerabilities in your application.

Why is that?

The reasons are many. For one thing, Java has never delivered on its promises of security, stability and uniformity. People must struggle with issues that have long been resolved in other languages, like the idiotic memory management and garbage collection, reinventing the wheel on any more or less non-trivial piece of software. The language claims to be “easy” and “fool-proof” while silently letting people compare string object references instead of string contents with the equality operator. The discrepancy between the fantasy and the reality is huge in the Java world and getting worse all the time.
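The string comparison pitfall mentioned above, as a concrete sketch:

```java
public class StringEquality {
    public static void main(String[] args) {
        String a = new String("password"); // a distinct object on the heap
        String b = "password";             // the interned string literal

        // == compares object references, not contents: this prints false.
        System.out.println(a == b);        // false

        // .equals compares the characters: this prints true.
        System.out.println(a.equals(b));   // true
    }
}
```

Code that accidentally uses `==` here may appear to work whenever both strings happen to be interned literals, and then fail silently on user input – exactly the kind of subtle bug that turns into a security hole in, say, a password or token check.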

Still, the main reason, I think, is the quality of the developers: both the level of developer knowledge and expertise, as it were, and the sheer carelessness of Java programmers. Where C/C++ developers are actually masters of software development, Java developers are most of the time just coders. That makes a difference. People learn Java in all sorts of courses or by themselves – companies constantly hire Java developers, so it makes sense to follow the market demand. Except that those people are kids with ad-hoc knowledge of a programming language and absolutely no concept of software engineering. By contrast, most C/C++ people are actually engineers who know much better what they are doing, even when they write things in a different language. But “coders” are much cheaper than real engineers, so companies developing in Java end up with lots of them, and the software quality goes down the drain.

The difference in the quality of the software is easily apparent when you compare the diagrams for the types of issues detected, taken from the same report:

[Diagram: Veracode – problem areas by type of issue detected, per language]

You can see that code quality problems are only 27% of the total number of issues detected in the case of C/C++, while for Java code the quality issues represent a whopping 80% of the total.

Think again. Code written in Java has several times worse quality than code written in C/C++.

It is not surprising that the quality problems result in security vulnerabilities. Quality and security go hand in hand, and both require discipline and knowledge on the part of the developer. Where one suffers, the other inevitably does as well.

The conclusion: if you want secure software, you want C/C++. You definitely do not want Java. And even if you are stuck with Java, you still want C/C++ developers to write your Java code, because they are more likely to write better and more secure software.

Backdoors in encryption products

After the recent terrorist attacks, governments are again pushing for more surveillance, and the old debate on the necessity of backdoors in encryption software raises its ugly head again. Leaving the surveillance question aside, let’s see: what does it mean to introduce backdoors into programs, and how can they be harmful, especially when we are talking about security and encryption?

Generally, a backdoor is an additional interface to a program that is not documented, whose existence is kept secret, and which is used for purposes other than the main function of the program. Quite often, a backdoor is simply a testing interface that the developers use to run special commands and perform tasks that normal users would not need. Such testing backdoors are often left in the production code, sometimes completely unprotected, sometimes protected with a fixed password stored in the code of the program, where it is easy to find, i.e. also unprotected. Testing backdoors may or may not be useful to an attacker, depending on the kind of functionality they provide.
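A minimal sketch of such a password-“protected” testing backdoor (the class name and password are hypothetical); the point is that the fixed password ships inside every copy of the binary:

```java
public class BackdoorDemo {
    // Anti-pattern: a "testing" interface guarded by a password embedded in the code.
    private static final String DEBUG_PASSWORD = "letmein2015"; // hypothetical

    // Grants access to the hidden debug interface.
    static boolean debugLogin(String attempt) {
        return DEBUG_PASSWORD.equals(attempt);
    }

    public static void main(String[] args) {
        // Anyone with the compiled class can recover the constant, e.g. with
        //   javap -c BackdoorDemo   or   strings BackdoorDemo.class
        System.out.println(debugLogin("letmein2015")); // true -- full debug access
    }
}
```

Because the check is against a constant, finding the string once breaks the “protection” for every installation at once; there is no secret that differs per user or per machine.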

Sometimes backdoors are introduced with the explicit purpose of gaining access to the program surreptitiously. These are often very powerful tools that allow full access to all functionality of the program, and sometimes add functions that are not even available through the regular user interface. In security and encryption products, such backdoors could allow unauthorized access, impersonation of other users, man-in-the-middle attacks, and collection of keys, passwords and other useful information, among other things.

The idea of the proponents of introducing backdoors into security and encryption software is that we could introduce such backdoors into the encryption and other tools used by the general public. Then, access to those backdoors would only be available to the police, the justice department, the secret services, immigration control and drug enforcement agencies… did I miss any? Maybe a few more agencies would be on the list, but they are all well-behaved, properly trained in computer security and completely legal users. That access would allow them to spy on the people using the tools in case those people turn out to be terrorists or something. Then the backdoors would come in really handy to collect evidence against the bad guys and perhaps even prevent an explosion or two.

The problem with this reasoning is that it assumes too much. The assumptions include:

  1. The existence of and access to the backdoors will not be known to the “bad guys”. As practice shows, the general public and the criminal world contain highly skilled people who can find those backdoors and publish (or sell) them for others to use. Throughout computing history, every single backdoor has eventually been found and publicized. Why would it be different this time?
  2. The “bad guys” will actually use the software containing the backdoors. That’s a big assumption, isn’t it? If those guys are clever enough to use encryption and other security software, why would they use something suspicious? They would go for tools that are well known to contain no such loopholes, wouldn’t they?
  3. The surveillance of everyone is acceptable as long as one of the people under surveillance is sometimes correctly determined to be a criminal. That whole preceding sentence is by itself the subject of many a fiction story and film – “Minority Report” comes to mind. The book “Tactical Crime Analysis: Research and Investigation” offers a good discussion of the problems of predicting crime even in repeat offenders; now try applying that to first-time offenders – you get literally random results. Couple that with the potential for abuse of the collected surveillance data… I don’t really even want to think about it.

So we would end up, among other things, with systems that can be abused by the very “bad guys” we are trying to catch while they themselves use other, trustworthy software, and with surveillance results on the general population that are wide open to abuse as well. I hope this is sufficiently clear now.

Whenever you think of “backdoors”, your knee-jerk reaction should be “remove them”. Even for testing, they are too dangerous. If you introduce them in the software on purpose… pity the fool.

CAST Workshop “Secure Software Development”

We are organizing the workshop on “Secure Software Development” now for the third year in a row. As usual, the workshop is in Darmstadt and the logistics are handled by CAST e.V. The date for the workshop is 12 November.

This year most presentations seem to be in German, so it probably does not make much sense for non-German speakers. But if you speak German, we have some rather interesting subjects, like our experiences with vulnerability management, research into the sociotechnical basis of development security, and the problems of developing mobile payment infrastructure security.

The workshop is a great place for discussions and meeting various people working on security in software development. Please, come and join us on 12 November!

Windows 10: catching up to Google?

Windows 10 has turned out to be a very interesting update to the popular desktop operating system. Apparently, Microsoft envies Google their success in spying on everyone and their dog through the Internet. Accordingly, Microsoft could not resist turning Windows into a mean spying machine. People were mightily surprised as all of the new spying features of Windows started to be uncovered.

To start with, the EULA, the license agreement, actually states clearly that Microsoft will collect your browsing history, WiFi access point names and passwords, and website passwords. All of this information will be stored in the “user’s” Microsoft account, i.e. on Microsoft’s servers. Every user will receive a unique identification number that will be made available to third parties for targeted advertising.

When you use BitLocker for disk encryption, the key will also be stored at Microsoft! The license agreement states that the password will be copied automatically to the OneDrive servers. I told you that going with BitLocker was not something a sane person would do, didn’t I?

And now all of that personal data can be used by Microsoft at will:

We will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary to protect our customers or enforce the terms governing the use of the services.

See, it’s not just in case a court issues an order, but simply whenever Microsoft thinks they need to.

Some observers report that the license also reserves the right for Microsoft to disconnect “unlicensed hardware”. I did not find that part in the EULA, though, so I don’t know whether it is true. I did find something else: Windows 10 will also remove your anti-virus or other anti-malware protection: “other antimalware software will be disabled or may have to be removed”.

That is the part about the EULA. There is also Cortana, the virtual assistant, and various parts of the OS that submit information to Microsoft. Well, Cortana can be disabled. However, it turns out that even disabling every single thing that reports user information does not help – Windows 10 still reports a lot of things, now without even informing the user. Apparently, the user cannot switch off all of the monitoring.

One of the things that cannot be switched off is a built-in keylogger. The keystrokes are recorded in a temporary file and then submitted to Microsoft’s servers. The keylogger is active even when you are not logged into a Microsoft account.

Another thing is the microphone and camera. Whenever the microphone is on, it records the sound and transmits it to the company’s servers. The same happens with the video camera: the video is recorded automatically and the first 35 MB are sent over to Microsoft.

Microsoft explains that all of this is necessary to create a database of users so that targeted advertising can be sold to third parties. However, these are obvious privacy violations, and some of them are performed without even informing the user.

Microsoft has also announced that some of the features of Windows 10 will be backported to previous versions of Windows. So we can soon expect updates for previous versions that will introduce these spying features across all Windows computers.

Continue the TrueCrypt discussion: Windows 10

I have already pointed out that I do not see any alternative to TrueCrypt for encrypting data on disk. TrueCrypt is the only tool that we can more or less trust so far. You will probably remember that Bruce Schneier recommended using Windows’s own encryption, BitLocker, instead of TrueCrypt, and that I called that idea nonsense. To prove me right, here comes the Windows 10 End User License Agreement (EULA), which states explicitly that Microsoft will retain the keys to the encryption.

This is rather amazing but, indeed, if you use BitLocker to encrypt the data on your disk, the key will be copied by Microsoft to the OneDrive servers. Of course, that makes the encryption quite pointless, as the OneDrive servers are controlled by Microsoft, and they will give the key to government authorities and intelligence agencies.

Moreover, Microsoft actually reserves the right to do anything they want with all your data, which by definition includes your keys and the data protected by the encryption:

We will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary to protect our customers or enforce the terms governing the use of the services.

So, really, all of your information is not only accessible to the government and intelligence agencies; even the company itself will access and manipulate your data whenever they believe it “necessary”.

Yes, TrueCrypt remains the only tool for disk encryption on Windows, and you cannot, in good faith, claim that BitLocker is a good substitute. And, really, go Linux already.


Since the anonymous team behind TrueCrypt has left the building, security-aware people have been left wondering what comes next. I personally keep using TrueCrypt, and as long as it works I will keep recommending it.

Recently, Bruce Schneier has raised a few red flags with strange advice that seems to indicate that he is now being paid for his “services to the community” by parties not so interested in keeping the community secure. One more such thing is his advice to switch from TrueCrypt to BitLocker.

The guys who “disappeared” from behind TrueCrypt recommended switching to BitLocker, and that makes BitLocker suspect right away. Moreover, anyone working in security would be right to suspect that BitLocker, coming from Microsoft, would be backdoored. And now Bruce Schneier comes out and says he recommends BitLocker instead of TrueCrypt? Great. I am not going to trust either of them.

For the moment, TrueCrypt remains the only trustworthy application for disk encryption. There is an effort to make TrueCrypt survive and support the newer features of the file systems. I hope it works out and we still have a tool to trust five years from now.

I have also stored the recent versions of TrueCrypt.
