House key versus user authentication

I got an interesting question regarding the technologies we use for authentication that I will discuss here. The gist of it is this: we go all out on authentication technologies, even trying unsuitable ones like biometrics, while we still use fairly simple keys to open our house doors. Why is that? Why is a house secured with a simple key, one that could be photographed and copied, and yet that seems sufficient? Why, by comparison, is biometrics not enough as an authentication mechanism?

Ok, so let’s first look at the house key. The key is not really an identification or authentication device; it is an authorization device. The key says “I have the right to enter” without providing any identity whatsoever. So the whole principle is different here: whoever has the key has the authorization to enter. That’s why we protect the key and try not to give it to anyone – obtaining the key, or a copy of it, is equivalent to obtaining the authorization to perform an action, like using the house.

Now, even if you do have a key, if you obtained it without permission that does not make you the owner of the place. You are still an intruder, so if someone happens to be around, they will identify you as an intruder and call the police, who will be able to verify (authenticate) you as an intruder with an improperly obtained authorization (the key). So we have deterrents in place that provide additional layers of protection, and we do not really need to go crazy on the keys themselves.

Should an authentication system be compromised, however, the intruder would not be identified as such. On the contrary, he would be identified and authenticated as a proper, legitimate user of the system, with all the attached authorizations. That is definitely a problem – there is no further layer of protection in this case.

In the case of the house, passing authentication would be equivalent to producing a passport, letting the police verify you as the owner of the house, and then having them break down the door for you because you lost your key. Well, actually, issue you a copy of the key, but you get the point. False authentication causes deeper problems and more damage than false authorization. With a wrongly obtained authorization you can sometimes get false authentication credentials, but not always. With improper authentication you always get improper authorization.
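To make the distinction concrete, here is a minimal Python sketch (all names and values are illustrative, not taken from any real system): the “door” only checks possession of a token and learns nothing about who is holding it, while the second model establishes an identity first and derives the rights from that identity.

```python
from typing import Optional
import secrets

# --- Authorization by possession: the house key as a bearer token ---
# Whoever presents a valid token gets in; no identity is established.
VALID_DOOR_TOKENS = {secrets.token_hex(16)}   # the "keys" that have been cut

def door_opens(presented_token: str) -> bool:
    # The door checks only possession, not who is holding the key.
    return presented_token in VALID_DOOR_TOKENS

# --- Authentication first, authorization derived from identity ---
USERS = {"alice": "correct horse battery staple"}      # credential store
PERMISSIONS = {"alice": {"enter_house", "open_safe"}}  # rights per identity

def authenticate(username: str, password: str) -> Optional[str]:
    # Establishes *who* the caller is, or fails.
    if secrets.compare_digest(USERS.get(username, ""), password):
        return username
    return None

def authorize(identity: str, action: str) -> bool:
    # Rights are looked up for the authenticated identity.
    return action in PERMISSIONS.get(identity, set())

who = authenticate("alice", "correct horse battery staple")
print(who is not None and authorize(who, "enter_house"))  # True
```

In the first model, stealing the token is stealing the authorization; in the second, a compromised authentication step hands the attacker every authorization attached to the stolen identity.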

Workshop on Agile Development of Secure Software (ASSD’15)

Call for Papers:

First International Workshop on Agile Development of Secure Software (ASSD’15)

in conjunction with the 10th International Conference on Availability, Reliability and Security (ARES’15), August 24-28, 2015, Université Paul Sabatier, Toulouse, France

Submission Deadline: April 15, 2015

Workshop website:

http://www.ares-conference.eu/conference/workshops/assd-2015/

Scope

Most organizations use agile software development methods, such as Scrum and XP, to develop their software. Unfortunately, agile methods are not well suited for the development of secure systems: they allow requirements to change, favor frequent deliveries, use lightweight documentation, and their practices do not include security engineering activities. These characteristics limit their use for developing secure software. For instance, they do not consider conflicting security requirements that emerge in different iterations.

The goal of the workshop is to bring together security and software development researchers to share their findings, experiences, and positions about developing secure software using agile methods. The workshop aims to encourage the use of scientific methods to investigate the challenges related to the use of the agile approach to develop secure software. It also aims to increase the communication between security researchers and software development researchers to enable the development of techniques and best practices for developing secure software using agile methods.

 Topics of interest

The list of topics relevant to the ASSD workshop includes, but is not limited to, the following:

  • Challenges for agile development of secure software
  • Processes for agile development of secure software
  • Incremental development of cyber-physical systems
  • Secure software development training and education
  • Tools supporting incremental secure software development
  • Usability of agile secure software development
  • Security awareness for software developers
  • Security metrics for agile development
  • Security and robustness testing in agile development

 Important dates

Submission Deadline: April 15, 2015

Author Notification: May 11, 2015

Proceedings version: June 8, 2015

Conference: August 24-28, 2015

About the so-called “uncertainty principle of new technology”

It has been stated that new technology possesses an inherent characteristic that makes it hard to secure. This characteristic was articulated by David Collingridge in what many would like to see accepted axiomatically, even calling it the “Collingridge Dilemma” to underscore its immutability:

That, when a technology is new (and therefore its spread can be controlled), it is extremely hard to predict its negative consequences, and by the time one can figure those out, it’s too costly in every way to do much about it.

This is important for us because it may mean that any and all efforts we put into securing our systems are bound to fail. Is that really so? The statement has every appearance of being true, but there are two problems with it.

First, it is subject to the very same principle. This is a new statement that we do not quite understand. We do not understand whether it is true, and we do not understand what the consequences are either way. By the time we understand whether it is true or false, it will be deeply engraved in our development and security culture and very hard to get rid of. So even if it were useful, one would be well advised to exercise extreme caution.

Second, the proposed dilemma is only true under a certain set of circumstances: namely, when scientists and engineers develop a new technology looking only at its internal structure, without any relation to the world, to form, and to quality. Admittedly, this is what happens most of the time in academia, but that does not make it right.

When one looks only at the sum of parts and their structure within a system, one can observe that parts could be exchanged, modified and combined in numerous ways, often leading to something that has the potential to work. This way, new technologies and things can be invented indefinitely. Are they useful to society, the world and life as we know it? Where is the guiding principle that tells us what to invent and what not to? Taken this way, the whole process of scientific discovery loses its point.

Scientific discovery is guided and shaped by the underlying quality of life. Society influences what gets invented, whether we like it or not. We must not take for granted that we are always going the right way, though. Sometimes scientists should stand up for the fundamental principle of quality over quantity of inventions and fight for technology that would, in turn, steer society towards a better and more harmonious life.

Should technology be developed with the utmost attention to the quality it originates from, should products be built with the quality of life foremost in mind, this discussion would become pointless and the academic dilemma would not exist. Everything that is built from quality first remains so forever and does not require all this endless tweaking and patching.

We can base our inventions and our engineering on principles different from those peddled to us by current academia and industry. We can re-base society on quality first and foremost. We can create technologically sound systems that will be secure. We just have to forgo the practicality, the rationality that now guides everything even to the detriment of life itself, and concentrate on quality instead. Call it the “Zenkoff Principle”.

The beauty and harmony of proper engineering have been buried in our industry under the pressure of rationality and the rush to deliver, but we would do better to rediscover them than to cover their loss with pointless and harmful excuses.

P.S. Perhaps I should have written “quality” with a capital “Q” throughout, because I use the term not in the sense of “quality assurance” but in the sense of the inherent quality of everything, called “arete” by the Greeks, that originates both the form and the substance of new inventions.

Sony 2014 network breach, the most interesting question remains unanswered

The November 2014 breach of security at Sony Corporation remains the subject of conversation through the end of the year. Many interesting details have become known, while even more remain hidden. Most claims and discussions only serve to create noise and diversion, though.

Take the recent discussion of antivirus software, for example. Sony Corporation uses antivirus software internally – Norton, TrendMicro or McAfee, depending on the model and country (Sony uses Vaio machines internally). So I would not put much stock in the claims of competitors in the antivirus market that their software would have stopped the attackers. And it is irrelevant anyway: the breach was so widespread, and the attackers had such totality of control, that no single tool would have been enough.

The most interesting question remains unanswered, though. Why did the attackers decide to reveal themselves? They were inside Sony’s networks for a long time and extracted terabytes of information. What made them go for a wipeout and publicity?

Was publicity a part of a planned operation? Were the attackers detected? Were they accidentally locked out of some systems?

What happened is a very important question, because in the first case the publicity is part of the attack and the whole thing is much bigger than just a network break-in. In the latter two cases Sony is lucky, and it was then indeed “just” a security problem and an opportunistic break-in.

Any security specialist should be interested in knowing the bigger picture. Sony should be interested most of all, of course; for them, it is a matter of survival. Given their miserable track record in security, though, I doubt they are able to answer this question internally. So it is up to the security community, whether represented by specialist companies or by researchers online, to answer this most important question. If they can.

ENISA published new guidelines on cryptography

The European Union Agency for Network and Information Security (ENISA) has published the 2014 cryptographic guidelines “Algorithms, key size and parameters” as an update to the 2013 report. This year, the report has been extended to include a section on hardware and software side-channels, random number generation, and key life cycle management. The part of the previous report concerning protocols has been extended and converted into a separate report, “Study on cryptographic protocols”.

Together, the reports provide a wealth of information and clear recommendations for any organization that uses cryptography. Plenty of references are provided, and the documents are a great starting point for both design and analysis.
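For illustration, here is a minimal Python sketch of parameter choices broadly in line with such guidance, using the third-party “cryptography” package; the specific sizes below (AES-256 in GCM mode, a 3072-bit RSA modulus) are examples consistent with common “future use” recommendations, and the report itself should be consulted for the authoritative figures.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Symmetric encryption: AES in an authenticated mode (GCM) with a 256-bit key.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # 96-bit nonce; must never be reused with the same key
ciphertext = AESGCM(key).encrypt(nonce, b"payroll record", b"associated data")

# Asymmetric keys: RSA with a 3072-bit modulus and the standard public exponent.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()
print(public_key.key_size)  # 3072
```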

Heartbleed? That’s nothing. Here comes Microsoft SChannel!

The hype around the so-called “Heartbleed” vulnerability in the open-source cryptographic library OpenSSL was not really justified. Yes, many servers were affected, but the vulnerability was quickly patched and it was only an information disclosure flaw; it could not be used to break into servers directly.

Now we have the Microsoft Secure Channel library vulnerability (the “SChannel attack”) that allows an attacker to easily own Microsoft servers:

This security update resolves a privately reported vulnerability in the Microsoft Secure Channel (Schannel) security package in Windows. The vulnerability could allow remote code execution if an attacker sends specially crafted packets to a Windows server.

This vulnerability in Microsoft’s TLS implementation is much more serious, as it allows an attacker to take control of any vulnerable server remotely, basically by simply sending packets with commands. Microsoft admits that there are no mitigating factors and no workarounds, meaning that if you did not install the patch, your server is defenseless against the attack. Windows Server 2012, Windows Server 2008 R2 and Windows Server 2003, as well as workstations running Vista, Windows 7 and Windows 8, are all vulnerable.
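If you want to check whether a particular Windows machine already has the update, one rough option is to look for the corresponding hotfix in the installed-updates list. A minimal Python sketch (Windows only; the KB number below is the one associated with the November 2014 SChannel bulletin and should be verified against Microsoft’s advisory for your Windows version):

```python
import subprocess

def schannel_patch_installed(kb: str = "KB2992611") -> bool:
    # Lists installed hotfixes via WMIC and looks for the given KB identifier.
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return kb in output

if __name__ == "__main__":
    print("patched" if schannel_patch_installed() else "NOT patched - update now")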

This is as critical as it gets.

Visualization of world’s largest data breaches

I stumbled upon a very interesting infographic that portrays some of the world’s biggest data breaches in a running bubble diagram. Entertaining and potentially useful in presentations. Have a look.

Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone)

An excellent article by Sven Tuerpe argues that we pay excessive attention to the problems of encryption and insufficient attention to the problems of system security. I wholeheartedly agree with that statement. Read the original article: Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone).

Security cannot be based on encryption and encryption only. To be secure, a system must be built to withstand attacks from outside and from within. There is a lot of expertise in building secure devices and creating secure software, but none of it is used in the mobile devices of today. Whether those smartphones and tablets provide encryption or not is simply beside the point in most attack scenarios and for most kinds of usage. We have to get the devices secured in the first place before the discussion of encryption on them begins to make sense.

Facebook “joins” Tor – good-bye, privacy!

Multiple publications are touting Facebook’s announcement of a Tor-enabled version of the social networking website as nothing short of a breakthrough for anonymous access from “repressed nations”. They think that people around the world who wish their online identity and activity to remain hidden will now have a great time using Facebook through Tor.

From my point of view, the result is just the opposite. Facebook users sign in and are tracked across a multitude of collaborating sites. Using Facebook through Tor will actually disclose the identity and the activity of the person using it completely. This information will become available across several user-tracking websites, and the user will completely lose the anonymity they so strongly desired.

Lightbeam for Firefox shows how the user is tracked across different websites and tracking networks, and how these share information with each other.

Facebook previously denied access to its social network through the Tor network, citing security concerns. Surely you do not think they decided to provide Tor access because they wanted to be nice to those few who use Tor? Facebook is a commercial company under the control of the United States government, and don’t you forget it. The move to bring in a few thousand Tor users is unlikely to have any positive impact on their business but will require additional infrastructure. In other words, Facebook would be acting selflessly, causing themselves trouble for no commercial gain. I view such a move as extremely suspicious. Most likely, the company’s network will be used in online operations to unmask the identities of Tor users.

Of course, the proper way to keep your privacy online is to never use social networks of any kind and to discard every session after a short period and when switching activities. Searching for movie tickets? Use a session and discard it when done. Looking up the hospital’s admission hours? Discard when done. Otherwise, the network of tracking sites will connect the dots on you. If you use Facebook in the same session, your identity is revealed instantly and all of that activity will be linked to the real you.
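The cookie part of this linking is easy to demonstrate. A small Python sketch using the requests library, with httpbin.org purely as a demo endpoint (real browser tracking goes far beyond cookies, so this is only an illustration of why discarding sessions helps):

```python
import requests

# Activity 1: searching for movie tickets in its own session.
with requests.Session() as s:
    # A tracking site plants an identifier; it now sticks to this session.
    s.get("https://httpbin.org/cookies/set?tracker=abc123")
    print(s.cookies.get("tracker"))  # 'abc123' - everything here is linkable

# Activity 2: a fresh session carries none of the previous identifiers.
with requests.Session() as s:
    r = s.get("https://httpbin.org/cookies")
    print(r.json())  # {'cookies': {}} - nothing to link to the earlier activity
```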

We have released too much of our privacy to the Internet companies already. They are now slowly dismantling the last bastions, one of which is the Tor network, under the pretense of fighting online crime. Facebook, with its history of abusing its customers, should not be trusted on these matters. Their interest is not in protecting your privacy; they will betray you for money, rest assured.

Three roads to product security

I mentioned previously that there are three ways to secure a product from the point of view of a product manufacturing company. Here is a slightly more detailed explanation. This is my personal approach to classifying product security; you do not have to stick to it, but I find it useful when creating or upgrading a company’s security. I call these broad categories the “certification”, “product security” and “process security” approaches. Bear in mind that my definition of security is also much broader than the conventional one.

The first approach is the simplest: you outsource your product security to another company. That external company, usually a security laboratory, will check your product’s security, covering as many aspects as necessary for a set target level of security assurance, and will vouch for your product to your clients. This does not have to be as complicated and formal as the famous Common Criteria certification. The certification may be completely informal, but it will provide a level of security assurance to your clients based on the following parameters: how far the customers trust the lab, what target security level was set for the audit, and how well the product fared. Some financial institutions will easily recognize this scheme because they often use a trusted security consultancy to look into the security of products supplied to them.

Now, this approach is fine, and it allows you to keep security outside, with the specialists. There are, of course, a few problems with it too. The main ones are that it may be very costly, especially when trying to scale up, and that it usually does not improve security inside the company that makes the product.

So, if the company wants to build security awareness and plans to provide more than a single secure product, it is recommended to choose a more in-house security approach. Again, the actual expertise may come from outside, but in the following two approaches the company actually changes internally to provide a higher degree of security awareness.

One way is to use what I call “product security”. This is when you take a product and try to make it as secure as required without actually looking at the rest of the company. You only change those parts of the production process that directly impact security and leave everything else alone. This approach is very well described by the Common Criteria standard. We usually use the Common Criteria for security evaluations and certifications, but that is not required; you may simply use the standard as a guideline for your own implementation of security in your products, according to your own ideas of the level of security you wish to achieve. Common Criteria is an excellent guide that builds on the experience of many security professionals and can safely be called the definitive guide to product security in the current world.

Anyway, in the “product security” approach you will only be changing things that relate directly to the product you are trying to secure. That means there will be little to no impact on the security of other products, but you will have one secure product in the end. Should you wish to make a second secure product, you apply the same approach again.

Now, of course, if you want to make all of your products secure, it makes sense to apply something else, what I call “process security”. You set up a security program that makes sure certain processes are correctly executed, certain checks are performed and certain rules are respected, and all of that together increases the security of all of your products across the company. This is an orthogonal approach: you will not necessarily reach the required level of security very fast, but you will be improving the security of everything gradually and equally.

This “process security” approach is well defined in the OpenSAMM methodology, which can be used as a basis for the implementation of security inside the company. Again, OpenSAMM can be used for audits and certifications, but you may also use it as a guide to your own implementation: take the parts you think you need and adapt them to your own situation.

“Process security” takes the broad approach and increases security gradually across the board, while “product security” quickly delivers a single secure product, with improvements to other products being incidental. A mix of the two is also possible, depending on priorities.
