Passwords and other secrets in source code

Secrets are bad. Secrets in source code are an order of magnitude worse.

Secrets are difficult to protect. Every attacker goes after the secrets, and we must protect them against all attackers. Secrets are the valuable part of our software, and that is exactly why they are bad: they represent an area of heightened risk.

What would a developer do when his piece of software needs to access a password-protected server? That’s right, he will write the user name and the password into some constant and compile them into the code. A developer who wants to be clever, and has heard the word “security” before, will base64-encode the password to prevent it from “being read by anyone”.

The bad news is, of course, that whoever goes through the code will be able to follow the algorithm and data and recover the user name and password. Most software is available to anyone in source form, so it is not a stretch to assume an attacker will have it as well. Moreover, with the current level of binary code scanning tools, attackers do not even need the source code and do not need to do anything manually. Source and binary scanners pick out user names and passwords easily, even when obscuring algorithms are used.
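To see just how thin that base64 “protection” is, here is a minimal Python sketch; the credential string is made up for illustration:

```python
import base64

# The "clever" developer hides credentials by base64-encoding them
# and embedding the result as a constant in the source.
obfuscated = base64.b64encode(b"admin:s3cret").decode()

# Anyone reading the code (or any binary scanner) reverses it
# with a single call -- no cryptography is involved at all.
recovered = base64.b64decode(obfuscated).decode()

print(recovered)  # the original credentials, in the clear
```

Base64 is an encoding, not encryption: it has no key, so decoding is always trivial.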

So, the password you store in the source code is readily available. It’s really like placing the key to your home under the doormat. It’s that obvious.

Now, you shipped that same code to every customer. That means the same password works at every one of those sites. Your customers, and whoever else got the software in their hands, can access every site where your software is installed with that one password. And to top it off, you have no way of changing the password short of releasing a new version with a new password inside.

Interestingly, Facebook had this as one of their main messages to the attendees of the F8 Developers Conference: “Facebook security engineer Ted Reed offered security suggestions of a more technical nature. Reed recommended that conference attendees—particularly managers or executives that oversee software development—tell coders to remove any secret tokens or keys that may be lurking around in your company’s source code.”

Which means the story is far from over. Mainstream applications continue to embed secrets in their source code, defying the attempts to make our software world secure.

The thought of compiling the user names and passwords into the application should never cross your mind. If it does, throw it out. It’s one of those things you just don’t do.
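One common alternative is to keep credentials out of the code entirely and supply them at run time, for example through the environment. A minimal sketch, assuming hypothetical variable names `APP_DB_USER` and `APP_DB_PASSWORD` that an operator would set per site:

```python
import os

def get_db_credentials():
    """Read credentials from the environment at run time instead of
    compiling them into the code. Each site can use its own values,
    and rotating a password needs no new release."""
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if user is None or password is None:
        raise RuntimeError("Database credentials are not configured")
    return user, password

# For demonstration only: in production the operator sets these
# outside the source tree (service manager, secrets store, etc.).
os.environ["APP_DB_USER"] = "svc_account"
os.environ["APP_DB_PASSWORD"] = "per-site-secret"

user, password = get_db_credentials()
```

The point is not that environment variables are perfect, but that the secret now lives with the deployment, not in the shipped binary.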

Security Forum Hagenberg 2015

I will be talking about the philosophy of engineering, or the human factor in the development of secure software, at the Security Forum in Hagenberg im Mühlkreis, Austria, on 22 April.

My talk will concentrate on the absence of a holistic, systemic approach in current software development, a result of taking the scientific approach of “divide and conquer” a bit too far and applying it where it does not belong.

It may seem at first that philosophy has nothing to do with software development or with security, but that is not so. We, human beings, operate on a particular philosophical basis, a basis of thinking, whether implicit or explicit. Our technologies are fine; what needs to change is the human who applies and develops them. And that change starts in the mind, with the philosophy we apply to our understanding of the world.

House key versus user authentication

I got an interesting question regarding the technologies we use for authentication that I will discuss here. The gist of the question is that we go all out on authentication technologies, even trying unsuitable ones like biometrics, while, on the other hand, we still use fairly simple keys to open our house doors. Why is that? Why is the house secured with a simple key that could be photographed and copied, and that still seems sufficient? Why, then, is biometrics, for example, not enough as an authentication mechanism by comparison?

Okay, so let’s first look at the house key. The key is not really an identification or authentication device; it is an authorization device. The key says “I have the right to enter” without providing any identity whatsoever. So the whole principle is different here: whoever has the key has the authorization to enter. That is why we protect the key and try not to give it to anyone: obtaining the key, or a copy of it, is equivalent to obtaining the authorization to perform an action, like using the house.

Now, even if you have a key, if you obtained it without permission, that does not make you the owner of the place. You are still an intruder, so if someone happens to be around, they will identify you as an intruder and call the police, who will be able to verify (authenticate) you as an intruder with an improperly obtained authorization (the key). So we have deterrents in place that provide additional layers of protection, and we do not really need to go crazy on the keys themselves.

Should an authentication system be compromised, however, the intruder would not be identified as such. On the contrary, he would be identified and authenticated as a legitimate user of the system, with all the attached authorizations. That is definitely a problem: there is no further layer of protection in this case.

In the case of the house, passing authentication would be equivalent to producing a passport, letting the police verify you as the owner of the house, and then having them break down the door for you because you lost your key. Well, actually, issuing you a copy of the key, but you get the point. False authentication causes deeper problems and damage than false authorization. With a wrongly obtained authorization you can sometimes get false authentication credentials, but not always. With improper authentication you always get improper authorization.

Workshop on Agile Development of Secure Software (ASSD’15)

Call for Papers:

First International Workshop on Agile Development of Secure Software (ASSD’15)

In conjunction with the 10th International Conference on Availability, Reliability and Security (ARES’15), August 24-28, 2015, Université Paul Sabatier, Toulouse, France

Submission Deadline: April 15, 2015

Workshop website:


Most organizations use agile software development methods, such as Scrum and XP, to develop their software. Unfortunately, agile methods are not well suited to the development of secure systems: they allow requirements to change, prefer frequent deliveries, use lightweight documentation, and their practices do not include security engineering activities. These characteristics limit their use for developing secure software. For instance, they do not consider conflicting security requirements that emerge in different iterations.

The goal of the workshop is to bring together security and software development researchers to share their findings, experiences, and positions on developing secure software using agile methods. The workshop aims to encourage the use of scientific methods to investigate the challenges related to using the agile approach to develop secure software. It also aims to increase communication between security researchers and software development researchers to enable the development of techniques and best practices for developing secure software with agile methods.

Topics of interest

The list of topics that are relevant to the ASSD workshop includes the following, but is not limited to:

  • Challenges for agile development of secure software
  • Processes for agile development of secure software
  • Incremental development of cyber-physical systems
  • Secure software development training and education
  • Tools supporting incremental secure software development
  • Usability of agile secure software development
  • Security awareness for software developers
  • Security metrics for agile development
  • Security and robustness testing in agile development

Important dates

Submission Deadline: April 15, 2015

Author Notification: May 11, 2015

Proceedings version: June 8, 2015

Conference: August 24-28, 2015

About the so-called “uncertainty principle of new technology”

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, even calling it the “Collingridge Dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we make to secure our systems are bound to fail. Is that really so? The statement certainly appears true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture and very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances: namely, when scientists and engineers develop a new technology looking only at the internal structure of the technology itself, without any relation to the world, the form, and the quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one observes that parts can be exchanged, modified, and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. Are they useful to society, the world, and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is guided by the underlying quality of life, which shapes its progress. Society influences what has to be invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes scientists should stand up for the fundamental principle of quality over quantity of inventions and fight for technology that would in turn steer society towards a better and more harmonious life.

Should technology be developed with utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and the academic dilemma would not exist. Everything that is built from quality first remains such forever and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by the current academia and industry. We can re-base society to take quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo this practicality, the rationality that now guides everything, even to the detriment of life itself, and concentrate on the quality instead. Call it the “Zenkoff Principle”.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush of delivery but we would do better to re-discover it than to patch it with pointless and harmful excuses.


P.S. Perhaps I should have written “quality” with a capital “Q” throughout, because I use the term not in the sense of “quality assurance” but for the inherent quality of everything, called “arete” by the Greeks, that originates both the form and the substance of new inventions.

Sony 2014 network breach, the most interesting question remains unanswered

The November 2014 breach of security at Sony Corporation remained the subject of conversation through the end of the year. Many interesting details have become known, while even more remain hidden. Most claims and discussions only serve to create noise and diversion, though.

Take the recent discussion of antivirus software, for example. Sony Corporation uses antivirus software internally; it is Norton, TrendMicro or McAfee, depending on the model and country (Sony uses Vaio internally). So I would not put much stock in the claims of any competitor in the antivirus market that their software would have stopped the attackers. And it is irrelevant anyway: the breach was so widespread, and the attackers had such totality of control, that no single tool would have been enough.

The most interesting question remains unanswered though. Why did the attackers decide to reveal themselves? They were in the Sony networks for a long time, they extracted terabytes of information. What made them go for a wipeout and publicity?

Was publicity a part of a planned operation? Were the attackers detected? Were they accidentally locked out of some systems?

What happened is a very important question because in the former case the publicity is a part of the attack and the whole thing is much bigger than just a network break-in. In the latter cases Sony is lucky and it was then indeed “just” a security problem and an opportunistic break-in.

Any security specialist should be interested in knowing that bigger picture. Sony should be interested most of all, of course; for them, it is a matter of survival. Given their miserable track record in security, I doubt they are able to answer this question internally, though. So it is up to the security community, whether represented by specialist companies or by researchers online, to answer this most important question. If they can.


ENISA published new guidelines on cryptography

The European Union Agency for Network and Information Security (ENISA) has published the cryptographic guidelines “Algorithms, key size and parameters” 2014 as an update to the 2013 report. This year, the report has been extended to include a section on hardware and software side-channels, random number generation, and key life-cycle management. The part of the previous report concerning protocols has been extended and converted into a separate report, “Study on cryptographic protocols”.

The reports together provide a wealth of information and clear recommendations for any organization that uses cryptography. Plenty of references are provided, and the documents are a great starting point for both design and analysis.

Heartbleed? That’s nothing. Here comes Microsoft SChannel!

The hype around the so-called “Heartbleed” vulnerability in the open-source cryptographic library OpenSSL was not really justified. Yes, many servers were affected, but the vulnerability was quickly patched, and it was only an information disclosure vulnerability. It could not be used to break into the servers directly.

Now we have the Microsoft Secure Channel library vulnerability (“SChannel attack”) that allows an attacker to easily own MS servers:

This security update resolves a privately reported vulnerability in the Microsoft Secure Channel (Schannel) security package in Windows. The vulnerability could allow remote code execution if an attacker sends specially crafted packets to a Windows server.

This vulnerability in Microsoft’s TLS implementation is much more serious, as it allows an attacker to take remote control of any vulnerable server basically by sending packets with commands. Microsoft admits there are no mitigating factors and no workarounds, meaning that if you did not install the patch, your server is defenseless against the attack. Windows Server 2012, Windows Server 2008 R2 and Windows Server 2003, as well as workstations running Vista, Windows 7 and Windows 8, are all vulnerable.

This is as critical as it gets.

Visualization of world’s largest data breaches

I stumbled upon a very interesting infographic that portrays some of the world’s biggest data breaches in a running bubble diagram. Entertaining, and potentially useful in presentations. Have a look.


Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone)

An excellent article by Sven Tuerpe argues that we pay excessive attention to the problems of encryption and insufficient attention to the problems of system security. I wholeheartedly agree with that statement. Read the original article: Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone).

Security cannot be based on encryption and encryption only. To be secure, a system must be built to withstand attacks from outside and from within. There is a lot of expertise in building secure devices and creating secure software, but none of it is used in the mobile devices of today. Whether those smartphones and tablets provide encryption or not is simply beside the point in most attack scenarios and for most kinds of usage. We have to get the devices secured in the first place before the discussion of encryption on them begins to make sense.

© 2012-2015 Holy Hash!

Contact | Up ↑