Worst languages for software security

I was sent an article about the programming languages that generate the most security bugs in software today. The article appeared to draw on a report by Veracode, a company I know well, to discuss the software security problems found in applications written in different languages. That is an excellent question and a very interesting subject for discussion. Except that the article failed to actually discuss anything, making instead misleading and incoherent statements about both old-school languages like C/C++ and scripting languages like PHP. I fear we will have to look into this problem ourselves instead.

So, which languages are the worst when it comes to software security? Are they the old C and C++, as so many proponents of Java would like us to believe? Or are they the new and quickly evolving languages with little enforcement of structure, like PHP? Let's go to Veracode and get their report: "State of Software Security. Focus on Application Development. Supplement to Volume 6."

The report includes a very informative diagram showing what percentage of applications, grouped by language, passes the OWASP policy for a secure application out of the box. The OWASP policy is defined as "not containing any of the security problems on the OWASP Top 10 list of the most important vulnerabilities for web applications", and OWASP is the accepted industry authority on web application security. So if they say that something is a serious vulnerability, you can be sure it is. Let's look at the diagram:

[Diagram: Veracode – percentage of applications passing the OWASP policy, by language]

Fully 60% of applications written in C/C++ come without any of those most severe software security vulnerabilities listed by OWASP. That is a very good result and a notable achievement. Next, with pass rates one and a half to two times lower, come the three mobile platforms. The next actual programming language, .NET, fares more than twice as badly, Java two and a half times as badly, and the scripting languages three times as badly.

Think about it. Applications written in Java are almost three times as likely to contain security vulnerabilities as those written in C/C++. And C/C++ is the only language that gives you a better than 50% chance of not having serious security vulnerabilities in your application.

Why is that?

The reasons are many. For one thing, Java has never delivered on its promises of security, stability and uniformity. People must struggle with issues that have long been resolved in other languages, like the idiotic memory management and garbage collection, reinventing the wheel on every more or less non-trivial piece of software. The language claims to be "easy" and "fool-proof" while letting people unknowingly compare object references instead of string contents with the equality operator. The discrepancy between the fantasy and the reality is huge in the Java world and getting worse all the time.
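To make that string-comparison pitfall concrete, here is a minimal toy example:

public class StringComparison {
    public static void main(String[] args) {
        String expected = "secret";
        // new String(...) creates a distinct object, just as a string
        // built at run time (e.g. from user input) would be
        String supplied = new String("secret");

        System.out.println(expected == supplied);      // false: compares references
        System.out.println(expected.equals(supplied)); // true: compares contents
    }
}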

Still, the main reason, I think, is the quality of the developers: both the level of knowledge and expertise, as it were, and the sheer carelessness of Java programmers. Where C/C++ developers are actual masters of software development, Java developers are most of the time just coders. That makes a difference. People learn Java in all sorts of courses or by themselves – companies constantly hire Java developers, so it makes sense to follow the market demand. Except that those people are kids with ad-hoc knowledge of a programming language and absolutely no concept of software engineering. By contrast, most C/C++ people are actual engineers and know much better what they are doing, even when they write in a different language. But the "coders" are much cheaper than real engineers, so the companies developing in Java end up with lots of them, and the software quality goes down the drain.

The difference in software quality is readily apparent when you compare the diagrams of the types of issues detected, taken mostly from the same report:

[Diagram: Veracode – distribution of detected issue types, by language]

You can see that code quality problems are only 27% of the total number of issues detected in the case of C/C++, while for Java the code quality issues represent a whopping 80% of the total.

Think again. Code written in Java is several times worse in quality than code written in C/C++.

It is not surprising that the quality problems result in security vulnerabilities. Quality and security go hand in hand, and both require discipline and knowledge on the part of the developer. Where one suffers, the other inevitably does as well.

The conclusion: if you want secure software, you want C/C++. You definitely do not want Java. And even if you are stuck with Java, you still want to have C/C++ developers write your Java code, because they are more likely to write better and more secure software.

Passwords and other secrets in source code

Secrets are bad. Secrets in source code are an order of magnitude worse.

Secrets are difficult to protect. Every attacker goes after the secrets, and we must protect ours against all of them. The secrets are the valuable part of our software, and that is why they are bad – they represent an area of heightened risk.

What does a developer do when his piece of software needs to access a password-protected server? That's right, he writes the user name and the password into some constant and compiles them into the code. If the developer wants to be clever and has heard the word "security" before, he will base64-encode the password to prevent it from "being read by anyone".
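To be concrete, here is a minimal sketch of that anti-pattern; the class name and credential values are invented for illustration:

import java.util.Base64;

// Anti-pattern sketch: credentials compiled into the code. Do NOT do this.
public class ServerClient {
    private static final String USER = "backup_admin"; // hypothetical
    // base64("s3cret") - an encoding, not encryption; trivially reversed
    private static final String PASS_B64 = "czNjcmV0";

    static String password() {
        return new String(Base64.getDecoder().decode(PASS_B64));
    }

    public static void main(String[] args) {
        // Any string scanner finds USER, and decoding PASS_B64 is one line:
        System.out.println(USER + " / " + password());
    }
}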

The bad news is, of course, that whoever goes through the code will be able to follow the algorithm and the data and recover the user name and password. Most software is available to anyone in its source form, so it is not a stretch to assume an attacker will have it as well. Moreover, with the current level of binary code scanning tools, they do not even need the source code and do not need to do anything manually. Source and binary scanners pick out user names and passwords easily, even when obscuring algorithms are used.
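For a sense of how little protection the binary form offers, here is a toy sketch of what such a scanner does at its simplest – extracting printable string runs, the way the classic "strings" utility works (the minimum run length of 6 is an arbitrary choice):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Toy "strings" equivalent: print printable ASCII runs found in a binary.
// Embedded user names and passwords show up exactly this easily.
// Usage: java StringsScan <path-to-binary>
public class StringsScan {
    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        StringBuilder run = new StringBuilder();
        for (byte b : data) {
            if (b >= 0x20 && b < 0x7F) {   // printable ASCII byte
                run.append((char) b);
            } else {
                if (run.length() >= 6) System.out.println(run);
                run.setLength(0);
            }
        }
        if (run.length() >= 6) System.out.println(run);
    }
}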

So, the password you store in the source code is readily available. It’s really like placing the key to your home under the doormat. It’s that obvious.

Now, you shipped that same code to every customer. That means the same password works at every one of those sites. Your customers, and whoever else got the software into their hands, can access all of the sites that have your software installed – with the same password. And to top it off, you have no way of changing the password short of releasing a new version with a new password inside.

Interestingly, Facebook had this as one of their main messages to the attendees of the F8 Developers Conference: “Facebook security engineer Ted Reed offered security suggestions of a more technical nature. Reed recommended that conference attendees—particularly managers or executives that oversee software development—tell coders to remove any secret tokens or keys that may be lurking around in your company’s source code.”

Which means the story is far from over. Mainstream applications continue to embed secrets in the source code, defying the attempts to make our software world secure.

The thought of compiling the user names and passwords into the application should never cross your mind. If it does, throw it out. It’s one of those things you just don’t do.
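If an illustration helps, the minimally better approach takes the secret from the runtime environment or an external store instead of the code; a sketch, assuming a deployment that sets the hypothetical BACKUP_SERVER_PASSWORD variable:

public class ServerClientFixed {
    // Sketch: the secret is supplied at deployment time (environment,
    // config service, secrets manager) and never compiled in.
    static char[] password() {
        String pass = System.getenv("BACKUP_SERVER_PASSWORD");
        if (pass == null) {
            throw new IllegalStateException("BACKUP_SERVER_PASSWORD is not set");
        }
        return pass.toCharArray();
    }
}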

House key versus user authentication

I got an interesting question regarding the technologies we use for authentication that I will discuss here. The gist of the question is this: we go all out on authentication technologies, even trying unsuitable ones like biometrics, while, on the other hand, we still use fairly simple keys to open our house doors. Why is that? Why is the house secured with a simple key that could be photographed and copied, and that seems sufficient nevertheless? Why, then, is biometrics, for example, not enough as an authentication mechanism by comparison?

Ok, so let’s first look at the house key. The key is not really an identification or authentication device. It is an authorization device. The key says “I have the right to enter” without providing any identity whatsoever. So the whole principle is different here: whoever has the key has the authorization to enter. That’s why we protect the key and try not to give it to anyone – obtaining the key or obtaining a copy of the key is equivalent to obtaining an authorization to perform an action, like using the house.

Now, even if you do have a key, if you obtained it without permission, that does not make you the owner of the place. You are still an intruder, so if someone happens to be around, they will identify you as an intruder and call the police, who will be able to verify (authenticate) you as an intruder with an improperly obtained authorization (the key). So we have deterrents in place that provide additional layers of protection, and we do not really need to go crazy on the keys themselves.

Should an authentication system be compromised, however, the intruder would not be identified as such. On the contrary, he would be identified and authenticated as a proper, legitimate user of the system, with all the attached authorizations. That is definitely a problem – there is no further layer of protection in this case.

In the case of the house, passing authentication would be equivalent to producing a passport, letting the police verify you as the owner of the house, and having them break down the door for you because you lost your key. Well, actually, issue you a copy of the key, but you get the point. False authentication cuts deeper, in the problems and damage it can cause, than false authorization. With a wrongly obtained authorization you can sometimes get false authentication credentials, but not always. With improper authentication you always get improper authorization.
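In code terms, the distinction could be sketched like this (a toy model; all names and values are invented):

import java.util.Map;
import java.util.Set;

// Authorization only: whoever presents a valid key may enter, and the
// lock learns nothing about WHO is entering.
class DoorLock {
    private final Set<String> validKeys = Set.of("A1B2C3");

    boolean open(String keyCode) {
        return validKeys.contains(keyCode);
    }
}

// Authentication: establishes WHO the caller is; authorizations are then
// derived from that identity. Fool this check and you inherit every right
// of the impersonated user - there is no further layer behind it.
class LoginSystem {
    private final Map<String, String> passwordHashByUser =
            Map.of("alice", "hash-of-alices-password");

    boolean authenticate(String user, String passwordHash) {
        return passwordHash.equals(passwordHashByUser.get(user));
    }
}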

About the so-called “uncertainty principle of new technology”

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, and even call the "Collingridge Dilemma" to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? Now, this statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture and very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances – namely, when scientists and engineers develop a new technology looking only at the internal structure of the technology itself, without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one can observe that the parts could be exchanged, modified and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. But are they useful to society, the world, and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is shaped by the underlying quality of life, which guides its progress. Society influences what has to be invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes the scientists should stand up for the fundamental principle of quality over quantity of inventions and fight for technology that would, in turn, steer society towards a better and more harmonious life.

Should technology be developed with utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and the academic dilemma would not exist. Everything that is built from quality first remains such forever and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by the current academia and industry. We can re-base society to take quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, the rationality that now guides everything, even to the detriment of life itself, and concentrate on quality instead. Call it the "Zenkoff Principle".

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush of delivery, but we would do better to rediscover them than to patch over their absence with pointless and harmful excuses.


P.S. Perhaps I should have written "quality" with a capital "Q" throughout, because I use the term not in the sense of "quality assurance" but for the inherent quality of everything, called "arete" by the Greeks, that originates both the form and the substance of new inventions.

Heartbleed? That’s nothing. Here comes Microsoft SChannel!

The hype around the so-called "Heartbleed" vulnerability in the open-source cryptographic library OpenSSL was not really justified. Yes, many servers were affected, but the vulnerability was quickly patched, and it was only an information disclosure vulnerability. It could not be used to break into servers directly.

Now we have the Microsoft Secure Channel library vulnerability ("SChannel attack"), which allows an attacker to easily own Microsoft servers:

This security update resolves a privately reported vulnerability in the Microsoft Secure Channel (Schannel) security package in Windows. The vulnerability could allow remote code execution if an attacker sends specially crafted packets to a Windows server.

This vulnerability in Microsoft's TLS implementation is much more serious, as it allows an attacker to take control of any vulnerable server remotely, basically by simply sending packets with commands. Microsoft admits that there are no mitigating factors and no workarounds, meaning that if you have not installed the patch, your server is defenseless against the attack. Windows Server 2012, Windows Server 2008 R2 and Windows Server 2003, as well as workstations running Vista, Windows 7 and Windows 8, are all vulnerable.

This is as critical as it gets.

More on WordPress xmlrpc denial of service attacks

Attacks on WordPress through the xmlrpc.php service are rather common. I already mentioned that you could filter out unwanted user-agents using the redirect capability of Apache. That, however, takes care only of the obvious cases, where you can see that a particular user-agent could not possibly be your reader. What do we do if the user-agent looks normal?

Well, if you do not need the xmlrpc service, you could block it off completely with mod_rewrite for all access:

<IfModule mod_rewrite.c>
RewriteEngine On
# Match any request for xmlrpc.php (note the escaped dot - this is a regex)
RewriteCond %{REQUEST_URI} ^/xmlrpc\.php
RewriteRule .* - [F,L]
</IfModule>

This will return a 403 for all requests. It is basically the equivalent of a <Files> directive specifying "Deny from all" for the file path. It blocks all access to xmlrpc completely though, for all purposes, so you will not be able to use the service at all – which is not always acceptable.

The good news is that this set of rules is extensible with further conditions, so you could again block only the requests with a particular user-agent. For example:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_URI} ^/xmlrpc\.php
# Patterns containing spaces must be quoted
RewriteCond %{HTTP_USER_AGENT} "^.*NET CLR.*$" [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*Mozilla/5\.0.*Windows.*NT.*6.*$
RewriteRule .* - [F,L]
</IfModule>

And so this becomes an extensible list of rules. You check your logs, see suspicious requests, and add them to the list. Stack additional rules with the [OR] flag at the end of each condition line.

Now we have a set of rules that blocks some of the access to xmlrpc based on the user agent reported by the attacker. We could also add filtering by referrer, by IP ranges and so on. The arms race – you get the picture.

Mitigating Denial of Service attacks to WordPress xmlrpc

I have attracted attention, apparently. My website has been under a Distributed Denial of Service (DDoS) attack by a botnet for the last week. I am flattered, of course, but frankly, I could live without a DDoS.

The requests go to xmlrpc.php every second or two, each time from a different IP address from around the world:

POST /xmlrpc.php HTTP/1.1

At first I could not understand what was going on, but it turns out that this request can be really expensive: the database basically gets overloaded with requests, bringing the database server to a screeching halt after a while.

After trying to blackhole the IP addresses and finding out that the botnet is fairly large, I simply denied all access to xmlrpc.php. That is a simple and effective solution but it breaks some functionality that is expected of a WordPress site. I don’t like that. So I was looking for a way to block the attackers without crippling the site.

I noticed that all of the requests carry a particular HTTP user-agent header:

"Mozilla/4.0 (compatible; Win32; WinHttp.WinHttpRequest.5)"

So, in .htaccess, I redirect all requests with that user agent back to themselves (you could also redirect them to 127.0.0.1 with the same effect):

# Block attackers by user agent
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} WinHttp\.WinHttpRequest\.5
RewriteRule .* http://%{REMOTE_ADDR}/ [R,L]
</IfModule>

It seems to have mitigated the attacks by that particular botnet software while allowing access from all other browsers and sites. I hope it stays that way. I don’t think my site is really worthy of this kind of attention anyway.

Over-engineering

The causes of security problems are legion. One highly pertinent problem in software development is called "over-engineering" – the creation of an over-complicated design or over-complicated code not justified by the complexity of the task at hand.

Often it comes as a result of the designer's desire to show off, to demonstrate knowledge of all possible methods, patterns and tricks. Most of the time it impresses only people who are not deeply involved with the subject. It usually annoys people who know software design and the subject matter. And slightly more often than always it results in overly complicated code full of security holes.

[XKCD comic on over-engineering]

Of course, over-engineering is nothing new; it was popular in the old times too. The problem is, where the old mechanical contraptions were ridiculous and did not really quite work in real life, the contemporary computer software, although built in about the same way, somehow manages to struggle through its tasks. The electronics and software design industries are the most impacted by this illness, which has become an epidemic. Interestingly, this is one aspect where open-source software is no different from commercial software – open-source designers also try to impress their peers with design and code. That is how things are.

Over-engineering in software is not just big, it is omnipresent and omnipotent. The over-engineering disease has captured the software industry even more completely than advertising has captured broadcasting. The results are predictably sad and can be seen with an untrained eye. Morbidly obese code is generated by morbidly obese development tools, leading to a simply astonishing myriad of bugs in software that fails to perform its most apparent task, and written with so much effort that writing the original in assembler would have been faster.

The implications are, of course, clear: complexity is bad for security and bugs are bad for security. Mind you, the two interplay in ways that exacerbate the problem. To top it off, the software becomes unmaintainable in a number of ways, including the total impossibility of predicting the impact of even small changes to the overall system.

The correctness of a software implementation is hard to judge for non-trivial amounts of code. Software is one of the most complicated things ever invented by mankind. So when software gets more complex, we lose oversight and understanding of its inner workings. On large systems, it is nowadays usual for no one to have a complete overview of how the system works. Over-engineered software has much more functionality than required, complicating the analysis and increasing the attack surface. It tends to have functions that allow complex manipulation of parameters and tons of side effects. Compare making a reaping hook with making a combine harvester, from the point of view of not harming any gophers.

Bugs proliferate in over-engineered software both because of the complexity and because of the sheer size of the code – the two go hand in hand. We know by now that there are more bugs in more complex software. There is a direct correlation between the size of the code, the complexity of the code, and the number of bugs potentially present. Some of those bugs will be relevant to security. The bad news is that quite often what an attacker cannot achieve through a single bug, he can achieve through a combination of bugs. A bug in one place could bring a system into an inconsistent state that allows an attack somewhere else. And the more complicated the software becomes, the more "interesting" combinations of bugs there are. Especially when the software is over-engineered, a much wider range of functionality becomes accessible through security bugs.

As for maintainability, an over-engineered system becomes harder and harder to maintain because it is not a straightforward implementation of a single concept. Anyone trying to update such software will have a hard time understanding the design and all the implications of the necessary changes. I came across a serious problem once where a developer had to inherit a class (in Java) to add additional security checks into one of the methods. By doing so, he actually ruined the complete access control system. That was unintended, of course, but the system was so complicated that the flaw was not apparent until penetration testing showed that one could now send any administrative command to the server without authenticating, and the server would happily comply. Once the problem was discovered it became obvious in hindsight, of course. But the system was so complex that analyzing the impact of changes requires a non-trivial amount of effort, and any small change can easily turn into a disaster again.
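A much simplified sketch of how such an inherited override can void the checks; the classes here are hypothetical, not the actual system:

interface Session {
    boolean isAuthenticated();
}

class CommandHandler {
    // The base method is where the access control check lives.
    void handle(String command, Session session) {
        if (!session.isAuthenticated()) {
            throw new SecurityException("not authenticated");
        }
        execute(command);
    }

    void execute(String command) {
        // dispatch to the server core
    }
}

class AuditedCommandHandler extends CommandHandler {
    // Intended to ADD a check, but by not calling super.handle()
    // it silently drops the authentication check altogether.
    @Override
    void handle(String command, Session session) {
        audit(command);
        execute(command); // authentication is never verified
    }

    void audit(String command) {
        // log the command
    }
}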

The complex, non-maintainable code riddled with bugs becomes a security nightmare. The analysis of over-engineered software is not only expensive, it sometimes becomes simply impossible. Who would want to spend the next hundred years analyzing and fixing a piece of software? But that is what I had to estimate for some of the systems I came across. For such a system, there is no cure.

So, the next time you develop a system, follow the good old KISS principle: Keep It Simple, Stupid. If it worked for Kelly Johnson, it will work for you, too. When you maintain code and make changes, try to bring the code size down and reduce functionality. Code size and complexity are directly correlated, so a decreasing KLOC count is a good indicator.

Camera and microphone attack on smartphones

Researchers at the University of Cambridge have published a paper titled "PIN Skimmer: Inferring PINs Through The Camera and Microphone", describing a new approach to recovering PIN codes entered on a mobile on-screen keyboard. We had seen applications use the accelerometer and gyroscope before to infer the buttons pressed. This time, they use the camera to figure out where the fingers touch the screen, after the microphone has signalled the start of a PIN entry. The success rate varies between 30% and 60%, depending on the configuration and the number of samples. And that is a lot.

This attack falls into the category of side-channel attacks, and it is rather hard to prevent. The paper explains in detail how the attack works and gives mitigation recommendations for developers. It also references several other works that mount side-channel attacks using smartphone sensors. For mobile application developers, it would be wise to read through this and the referenced publications to find out what the current state of the art is.

Google bots subversion

There is a lot of truth in the saying that every tool can be used for good and for evil. There is no point in blocking the tools themselves, as the attacker will turn to new tools or subvert the very familiar ones in unexpected ways. Now Google's crawler bots have been turned into just such a weapon, used to execute SQL injection attacks against websites chosen by the attackers.

The discussion of whether Google should or should not do anything about that is interesting, but we are not going to have it here. Instead, consider that this is a prime case of a familiar tool – one that comes back to your website regularly – being subverted into doing something evil. You did not expect that to happen, and you cannot just block Google from your website. This is a perfect example of a security attack where your application's security is the only way to stop the attacker.

The application must be written in such a way that it does not matter whether it is protected by a firewall – you will not always be able to block attacks with the firewall. The application must also be written to withstand unanticipated attacks, the ones you could not predict in advance. The application must be prepared to ward off things that did not yet exist at the time of writing. Secure design and coding cannot be replaced with firewalls and add-on filtering.
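For the SQL injection case specifically, that means, among other things, binding untrusted input as parameters instead of concatenating it into the query text. A minimal JDBC sketch (the table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class ArticleDao {
    // Vulnerable variant, shown only as a comment: attacker-controlled
    // 'id' is pasted into the SQL text, so a crafted request rewrites it:
    //   String sql = "SELECT title FROM articles WHERE id = " + id;

    ResultSet findTitle(Connection conn, String id) throws SQLException {
        // Safe: the value is bound as data and is never parsed as SQL,
        // no matter which client - browser or crawler bot - sent it.
        PreparedStatement ps =
                conn.prepareStatement("SELECT title FROM articles WHERE id = ?");
        ps.setString(1, id);
        return ps.executeQuery();
    }
}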

Only such securely designed and implemented applications withstand unexpected attacks.
