Holy Hash!

#security: software development security and web security, security best practices and discussions, break-ins and countermeasures. Everything you ever wanted to know about software security but were afraid to ask, for fear of not understanding the answer!

Dark alleys of cybersecurity

The security of the so-called “cyberspace” has deteriorated beyond belief. Some people tell me that my stories are far-fetched and that I view the security and computer industry with a sort of depressing negativism. I disagree. The problem is, I am trying to stay positive and optimistic. My tales rarely go to the full extent of what is happening. The reality is much worse and scarier. Why do we tend to think, then, that Internet reality is all cheerful and pink? Because our judgement is severely distorted by our perception of the Internet world.

When you walk around town, you come across various parts of it and you are usually able to assess the dangers accurately. You walk on a wide street; there are sufficiently many people around but not too many to invade your personal space. The street is well lit, or it is daytime. There is a policeman on the corner… What do you feel? Your body tells you it is all safe. The scene is assessed automatically and provides a relaxing feeling of “it’s all right.”

Now imagine you are walking at night through a dark part of town. Small streets, poorly lit, people are scarce. You are approaching a dark alley; it smells funny, there are some indistinct shadows moving ahead. A police siren wails in the distance… How do you feel? You tense up, readying the “fight or flight” reflex you inherited from the Stone Age, the one that keeps you alive in situations like this. Your body sends a clear signal: this place is dangerous. You have assessed your situation correctly.

Let’s now go onto the Internet. We can do it from various places with various devices, but let’s stay traditional for this example. You sit at home, at your desk, wearing comfortable home clothes, your slippers on; it is evening outside but inside it is all warm and cozy, you have your cup of coffee at your elbow, and you visit a website. A bad one. One from the dark alleys of the Internet.

What do your senses tell you about the website you are visiting? Or even about the state of your own computer? Well, basically, nothing. Your standard human senses are dealing with the standard Stone Age parameters: you are at home, in safety, it’s warm, you feel protected, there is food, no danger. The body is sending you the signal to relax. However, that signal has nothing to do with what you are doing at the moment. Your assessment of the situation may be completely wrong.

And therein lies the problem. We are not equipped to recognize the dangers of the Internet. Whatever we do at the computer screen, our feelings of comfort and safety are not influenced by our actions at all. Therefore, we can no longer rely on those most basic instincts to tell us whether something is safe to do. Not when we are in cyberspace.

The only way to adequately assess the dangers of the Internet is to learn to think about them logically. To perform a logical assessment of the danger of entering a website, you must intentionally exclude the cozy bodily feeling from your equations. The equations will also require education and practice. You must learn what good behavior is, what a bad site might look like, what suspicious activity is, and so on. Just the way you learn to drive a car. It takes knowledge, training and cool logical thinking to drive a car without causing accidents all the time. Training and education will, over time, result in a new kind of situational awareness that will allow you to assess your situation and your actions on the Internet correctly.

Failing that, think of the Internet as a dark alley full of indistinct but dangerous looking shadows. It might help. Or, better, ask someone who knows to help.

Strategy towards more IT security: the road paved with misconceptions

The strategy towards more IT security in the “Internet of Things” is based a little more than entirely on misconceptions and ignorance. The policy makers simply reinforce each other’s “ideas” without any awareness of where the road they follow is leading.

As I listened to the talks at the K-ITS 2014 conference, it became painfully obvious that most speakers should not be speaking at all. They should be listening. The conference is supposed to discuss strategies towards more IT security in a future industry that will have both factories and cars connected to the Internet. That future isn’t bright, far from it. We are fighting battles on the Internet for web servers, personal computers and mobile phones now. We will be fighting battles for refrigerators, nuclear power plants and medical implants in the near future. We definitely need better ideas for those battle plans. Instead we hear, if anything, ideas about improving the attitudes of buyers, i.e. “how can we convince the customers that our security is okay and they should pay more?”

I detail here five misconceptions that were very obvious and widespread at the conference. Even security management at the top level shares them, though they should know better. And the worst part is, they all seem to believe that it will be all right if they throw some important-sounding names and acronyms at it.


Divide security into “levels”

A prominent theme is the division of the industrial landscape into various “areas” of differing security requirements. There is nothing wrong with the concept itself, of course, except that it is applied in a context where it will do more harm than good.

The policy makers seem to think that they can divide the industry into ‘critical infrastructure’, ‘things that need security’, and ‘things that do not need security’. Right, for the sake of argument, assume we can. Then what? And then, they say, we will invest in security where it matters most. That, on the surface, looks like a sound plan.

The problems start when you try to apply this concept to software development. How do we distinguish between software written for ‘secure’ and ‘insecure’ applications? How do we make authors of libraries and tools write their software to the highest standards to satisfy the ‘most secure’ part of the industry? What about the operating systems they use? What about people who wander from one company to another, bringing with them not only expertise but mistakes and security holes?

Once you start thinking about this approach in practical terms, it quickly becomes untenable.

The only way to improve the security of any software is to improve the security level of the whole software industry. Software not written specifically for a high security environment will end up there whether we want it or not. Developers not skilled and not trained in writing secure software will end up writing for such environments anyway. It’s unavoidable.

But that is only one side of the problem. Why have the division in the first place? Yes, critical infrastructure is critical, but that stupid mirror with a network interface will also end up in a secure facility, and how do we know what the next attack path will look like? The noncritical infrastructure will be used to attack the critical infrastructure, isn’t it obvious? All infrastructure and all consumer devices need protection if we want a secure Internet of Things.

Software for all purposes is written everywhere by the same underpaid people who never had a proper security education. The general tendency is, unfortunately, for software quality and security to get worse. And as it gets worse everywhere, it does, of course, get worse for critical infrastructure as well as for consumer electronics.

Investment should go into the state of software in general, not into the state of some particular software. Otherwise, it won’t work.

Security should not prevent innovation

Says who? Not that I am against innovation, but security must sometimes prevent certain innovation, like the tweaking of cryptographic algorithms that would break security. There is such a thing as bad or ill-conceived innovation from the point of view of security (and, actually, from every other point of view, too). Wait, it gets worse.

‘Innovation’ has become the cornerstone of the industry, the false god that receives all our prayers. There is nothing wrong with innovation per se, but it must not take over the industry. Innovation is there to serve us, not the other way around. We took it too far; we pray to innovation in places where it does not matter or is even harmful. Innovation by itself, without a purpose, is useless.

We know that this single-minded focus will result in security being ignored time and again. There is too much emphasis on short-term success and quick development, resulting not only in low security but in low quality overall.

Finding ways of doing things properly is the real innovation. Compare this to civil engineering: building houses, bridges, nuclear power stations. What would happen if the construction industry were bent on innovation and innovation only, on delivering constructions now, without any regard for proper planning and execution? Well, examples are easy to find and the results are disastrous.

What makes the big difference? We can notice a bridge collapsing or a building falling down; we do not need to be experts in construction for that. Unfortunately, collapsing applications on the Internet are not that obvious. But they are there. We really need to slow down and finally put things in order. Or shall we wait for things to collapse first?

Convince the customer

We are bent on convincing the customer that things are secure. Not making things secure, but convincing everyone around that we are fine. Engaging in smoke and mirrors, that is. Instead of actually making things better, we announce that pretending things are better will somehow make them better. And we try, and succeed, to convince ourselves that this is somehow okay.

Well, it is not okay. We all understand the desire of commercial companies to avoid security publicity. We also know that eventually people catch up anyway. The rush to convince everyone and their grandma that things are going to be better exists precisely because people will be catching up on this foul play soon.

The market will shrink if people think there are security problems, but the market will crash when people find out they were lied to and your words are not worth the electrons they use to come across the Internet. Deceiving ourselves will lead to a disaster, and we have no way of controlling that. This is simply a fast track to security by obscurity.

Secure components mean secure systems

There is a commonly shared misconception that using secure components will somehow automatically lead to secure systems. When confronted with this question directly, people usually quickly realise their folly and will likely fervently deny such thinking but it is sufficient to listen to a presentation to realise that that is exactly the assumption behind many plans.

Secure components are never secure unconditionally. They are what we call conditionally secure. They are secure as long as a certain set of assumptions remains valid. Once an assumption is broken, or not met, the component is no longer secure. Who checks those assumptions? Who verifies that the developers upheld all of the assumptions specified by the authors of the underlying components? Who checks which assumptions remained undocumented?

When we combine components we create a new problem, the problem of composition. This is not an easy problem at all. By putting two secure components together, you do not automatically obtain a secure system. It may well be secure. Or it may not.

This problem of secure composition is well known to the developers and auditors of smart cards. And they do not claim to have a solution. And here we are, developers of systems orders of magnitude more complex, dismissing the problem as if it were not even worth our consideration. That is folly.
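To make the composition problem concrete, here is a deliberately tiny, hypothetical sketch in Python: two components that each honor their own specification, yet become insecure when composed in the wrong order. The function names and the traversal check are invented for illustration only.

```python
from urllib.parse import unquote

def sanitize(path: str) -> str:
    """Component A: reject path traversal.
    Secure under the assumption that its input is already fully decoded."""
    if ".." in path:
        raise ValueError("path traversal attempt")
    return path

def decode(path: str) -> str:
    """Component B: URL-decode a request path. Harmless on its own."""
    return unquote(path)

# Wrong order: decoding after sanitizing breaks A's assumption,
# so an encoded "%2e%2e" slips through the check unseen.
def resolve_insecure(raw: str) -> str:
    return decode(sanitize(raw))

# Right order: A's assumption holds and the traversal is caught.
def resolve_secure(raw: str) -> str:
    return sanitize(decode(raw))
```

Both components do exactly what they promise; only the composition is broken: `resolve_insecure("%2e%2e/etc/passwd")` happily returns `../etc/passwd`, while the correct order raises an error. The security of the whole hinges on an assumption that neither component checks.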

We need those things on the internet

Who said that factories need to be on the Internet? Who said that every single small piece of electronics or electrical device really needs to be on the Internet? Why do we think that having all of those things “talk” to each other would suddenly make us all happy?

The industry and the governments do not want to deal with any of the real problems plaguing societies the world over. Instead, they want to produce more and more useless stuff that allows them to appear as if they are doing something useful. They will earn lots of money and waste a lot more resources in the process. Should they be worried?

Take “smart cars”, for example: cars that communicate with each other over some wireless protocol to warn about accidents, road conditions, traffic jams. Think about it. A car cannot communicate very far. On a highway, by the time you get news of a traffic jam from your neighbouring cars, you will be standing in it. In the city, this information will be equally useless, because you will see the traffic jam and do what you always did: turn around and look for another street around the block. What of accidents? Again, that information is not much use to you in the city, where you basically don’t need it. They say cars will inform each other of accidents, but this information cannot be transmitted very far. By the time your car receives information about an accident on the highway ahead, displays it and you read it, you will be staring at the accident itself. The civil engineers are not that stupid, you know. They build highways so that you have enough time to see what is ahead and react. Extra information would only distract the driver there. So this whole idea is completely useless from the point of view of driving, but it will require enormous resources and some genius security solutions to artificially created problems.

And all of it is like that. We don’t need an “internet of things” in the first place. We should restrict what gets on the internet, not encourage the uncontrollable proliferation of devices arbitrarily connected to the network simply to show off. Yes, we can. But should we?

Mitigating Denial of Service attacks on WordPress xmlrpc

I have attracted attention, apparently. My website has been under a Distributed Denial of Service (DDoS) attack by a botnet for the last week. I am flattered, of course, but frankly, I could live without a DDoS.

The requests come in to xmlrpc.php every second or two, each time from a different IP address somewhere around the world:

POST /xmlrpc.php HTTP/1.1

At first I could not understand what was going on, but it turns out that this request can be really expensive: the database gets overloaded with queries, bringing the database server to a screeching halt after a while.

After trying to blackhole the IP addresses and finding out that the botnet is fairly large, I simply denied all access to xmlrpc.php. That is a simple and effective solution but it breaks some functionality that is expected of a WordPress site. I don’t like that. So I was looking for a way to block the attackers without crippling the site.

I noticed that all of the requests carry the same HTTP User-Agent header:

"Mozilla/4.0 (compatible; Win32; WinHttp.WinHttpRequest.5)"

So I redirect the requests with that user agent in .htaccess back to the senders themselves (you could also redirect them to 127.0.0.1 with the same effect):

# Block attackers by user agent
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} WinHttp\.WinHttpRequest\.5
RewriteRule .* http://%{REMOTE_ADDR}/ [R,L]
</IfModule>
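Before relying on a rule like this, it is worth checking from the access log that the suspicious user agent really does account for the xmlrpc.php flood. Here is a rough sketch, assuming Apache’s standard “combined” log format; the regex and the function name are mine, not part of any tool:

```python
import re
from collections import Counter

# Apache "combined" log format: IP, identd, user, [time],
# "METHOD path protocol", status, size, "referer", "user agent"
LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

def count_xmlrpc_agents(lines):
    """Count User-Agent strings of POST requests to /xmlrpc.php."""
    agents = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group(2) == "POST" and m.group(3) == "/xmlrpc.php":
            agents[m.group(4)] += 1
    return agents

# Example usage:
# with open("/var/log/apache2/access.log") as f:
#     print(count_xmlrpc_agents(f).most_common(5))
```

If one user agent dominates the counts, blocking on it is a reasonably surgical mitigation; if the botnet rotates user agents, this approach will not hold for long.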

It seems to have mitigated the attacks by that particular botnet software while allowing access from all other browsers and sites. I hope it stays that way. I don’t think my site is really worthy of this kind of attention anyway.

Cheap security in real life?

Security concerns are on the rise and companies are beginning to worry about the software they use. I have again received a question that bears answering for all the people and companies out there, because this situation happens often nowadays. So here is my answer to the question, which can be formulated thus:

“We are making a software product and our customer became interested in security. They are asking questions and offer to audit our code. We never did anything specifically for security so we worry what they might find in our code. How can we convince the customer that our product security is ok?”

There are, basically, three approaches to demonstrating your product security, if we take the question to mean “how can we make sure our software is secure?” Unfortunately, it is not meant that way. Unfortunately, the company producing the software is not interested in security, and the meaning of the question is rather “how can we make the customer get off our backs while we keep producing insecure software?”

That boils down to the switch from “security by ignorance” to “security by obscurity”, as I explained in one of my earlier posts titled “Security by …”. That is, of course, the cheapest possible solution in the short run. However, it does not eliminate the risk of the company suddenly going bankrupt due to a catastrophic security breach in one of its products. Sony Corporation lost $190 million during their PlayStation Network hiccup a few years back. Can your company survive this kind of sudden loss? Would it not be better to invest a few hundred thousand in product security to ensure the continuity of your business in the long run?

But nevertheless, the question was asked. What is a company to do when it is not willing to invest in a security program? When it insists on a quick and dirty fix?

The advice in this case is to go along with your customer’s wishes. They want to audit your code? Excellent, take the opportunity. A code audit, or rather, what they more likely mean, “white box penetration testing”, is an expensive effort. At a bare minimum, you are looking at about forty man-hours of skilled labor, or fifty thousand euro or dollar net expense. Are they willing to spend that kind of money on you? Great. Take them up on their offer.

Oh, of course, they will find all sorts of bugs, security holes and plain ugly code. Take it all in with thanks and fix everything pronto. You will get a loyal customer and a reputation for doing, frankly, the work that you should have done from the beginning anyhow.

Now, the important thing is the handling of those findings. Here many companies go wrong. It is not good enough for your business to just fix the reported problems. Studies show that developers never learn. That means your next release will have all those problems in the code again.

To do things right, you must set your development or testing teams a special task: they must find ways to discover those problems during the development cycle. They must be able to discover the types of problems that were reported to you before the customer can detect them. Then your code will improve and the fixing effort will be lower next time.

And there you are, a quick and dirty fix, as promised. Just don’t fall for the fallacy of thinking that you have security now. You don’t. To get proper security into your products and life cycle will take a different order of effort.

They never learn password security: Domino’s Pizza

The password database of Domino’s Pizza France and Belgium was stolen by the hackers of Rex Mundi, who demand a 30,000 euro payment to avoid disclosure. Well, Domino’s went to the police, so the 592,000 French and 58,000 Belgian customer records will be in the open tonight.

What is interesting, though? This is 2014. Do you know how they stored passwords? They used MD5 without salt or stretching. As if the previous 20 years never happened in their computer universe. We keep reiterating the good ways of storing passwords over and over again, and nobody listens.
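For contrast, proper password storage has been within easy reach for years. A minimal sketch using only Python’s standard library, with a per-password random salt and PBKDF2 key stretching (the iteration count here is a placeholder; tune it to current hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 100_000) -> bytes:
    """Hash a password with a random per-password salt and key stretching
    (PBKDF2-HMAC-SHA256). The salt is stored alongside the digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt + digest

def verify_password(password: str, stored: bytes, *, iterations: int = 100_000) -> bool:
    """Recompute the stretched hash and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed rainbow tables, the iteration count slows down brute force, and the constant-time comparison avoids timing leaks. None of this is exotic; it is exactly what unsalted MD5 fails to provide.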

Domino’s said in a statement: ‘The security of customer information is very important to us. We regularly test our UK website for penetration as part of the ongoing rigorous checks and continual routine maintenance of our online operations.’

I can feel sympathy for the challenges of securing a large network where you have to get many things right. But if you only do penetration testing and ignore everything else, you end up with a false sense of security. Judging by the fact that they did not know how to store passwords, that is exactly what happened. Penetration testing can supplement, but never replace, an all-encompassing security program.

The only upside is that Domino’s apparently stored credit card details separately, so the financial data has not fallen into the hackers’ hands. So, that’s good.


More: at The Guardian, Daily Mail, The Register.

TrueCrypt disappears

Quite abruptly, the TrueCrypt disk encryption tool is no more. The announcement says that the tool is no longer secure and should not be used. The website provides a heavily modified version of TrueCrypt (7.2) that allows one to decrypt the data and export it from a TrueCrypt volume.

Many questions are asked around what actually happened and why, the speculation is rampant. Unfortunately, there does not seem to be any explanation forthcoming from the developers. For the moment, it is best to assume the worst.

My advice would be not to download the latest version, 7.2. Stick to whatever version you are using now, if you are using TrueCrypt at all, and look for alternatives (although I do not know of any other cross-platform portable storage container tools). If you are on 7.1a, that version is still undergoing an independent audit and you may be well advised to wait for the final results.


Update: there is a Swiss website, truecrypt.ch, that promises to keep TrueCrypt alive. Most importantly, at the moment they have the full collection of TrueCrypt versions and all of the source code. There will probably be a fork of TrueCrypt later on.

Over-engineering

The causes of security problems are legion. One of the most pertinent problems in software development is called “over-engineering”: the creation of over-complicated design or over-complicated code not justified by the complexity of the task at hand.

Often it comes as a result of the designer’s desire to show off, to demonstrate knowledge of all possible methods, patterns and tricks. Most of the time it impresses only people who are not deeply involved with the subject. Usually it annoys people who know software design and the subject matter. Slightly more often than always it results in overly complicated code full of security holes.

XKCD on over-engineering

Of course, over-engineering is nothing new; it was popular even in the old times. The problem is, where the old mechanical contraptions were ridiculous and did not really work in real life, contemporary computer software, although built in about the same way, somehow manages to struggle through its tasks. The electronics and software design industries are the most affected by this illness, which has become an epidemic. Interestingly, this is one aspect where open source software is no different from commercial software: open source designers also try to impress their peers with design and code. That is how things are.

Over-engineering in software is not just big, it is omnipresent and omnipotent. The over-engineering disease has captured the software industry even more completely than advertising has captured broadcasting. The results are predictably sad and can be seen with an untrained eye. Morbidly obese code is generated by morbidly obese development tools, leading to a simply astonishing myriad of bugs in software that fails to perform its most apparent task and is written with so much effort that writing the original in assembler would have been faster.

The implications are, of course, clear: the complexity is bad for security and the bugs are bad for security. Mind you, the two interplay in ways that exacerbate the problem. To top it off, the software becomes unmaintainable in a number of ways, including the total impossibility of predicting the impact of even small changes to the overall system.

The correctness of a software implementation is hard to judge on non-trivial amounts of code. Software is one of the most complicated things ever invented by mankind. So when software gets more complex, we lose oversight and understanding of its inner workings. On large systems, it is nowadays usual for no one to have a complete overview of how the system works. Over-engineered software has much more functionality than required, complicating the analysis and increasing the attack surface. It tends to have functions that allow complex manipulation of parameters and tons of side effects. Compare making a reaping hook and a combine harvester from the point of view of not harming any gophers.

Bugs proliferate in over-engineered software both because of the complexity and because of the sheer size of the code, which go hand in hand. We know by now that higher complexity software contains more bugs. There is a direct correlation between the size of the code, the complexity of the code, and the number of bugs potentially present there. Some of those bugs are going to be relevant for security. The bad news is that quite often what an attacker cannot achieve through a single bug can be achieved through a combination of bugs. A bug in one place could bring a system into an inconsistent state that allows an attack somewhere else. And the more complicated the software becomes, the more “interesting” combinations of bugs there are. Especially when the software is over-engineered, a much wider range of functionality becomes accessible through security bugs.

As for maintainability, an over-engineered system becomes harder and harder to maintain because it is not a straightforward implementation of a single concept. Anyone trying to update such software will have a hard time understanding the design and all the implications of the necessary changes. I once came across a serious problem where a developer had to inherit a class (in Java) to add additional security checks into one of the methods. By doing so, he actually ruined the complete access control system. That was unintended, of course, but the system was so complicated that it was not apparent until penetration testing showed that one could now send any administrative command to the server without authenticating, and the server would happily comply. Once discovered, the problem was obvious in hindsight, of course. However, the system was so complex that analyzing the impact of changes required non-trivial effort. Any small change could easily turn into a disaster again.

Complex, unmaintainable code riddled with bugs becomes a security nightmare. The analysis of over-engineered software is not only expensive; it sometimes becomes simply impossible. Who would want to spend the next hundred years analyzing and fixing a piece of software? But that is what I had to estimate for some of the systems I came across. For such a system, there is no cure.

So, the next time you develop a system, follow the good old KISS principle: Keep It Simple, Stupid. If it worked for Kelly Johnson, it will work for you, too. When you maintain the code and make changes, try to bring the code size down and reduce functionality. Code size and complexity are directly correlated, so a decreasing KLOC count is a good indicator.

Software Security vs. Food Safety

A friend of mine works in a large restaurant chain in St. Petersburg. She is pretty high up in the chain of command after all these years. We talk about all sorts of things when we meet, and once she told me how they have to deal with safety and quality inspections and how bothersome and expensive those are. So I teased her: why don’t they just pay off the officials to get a certificate instead of going through all the trouble? And she answered seriously that that would only be good for a fly-by-night business that does not care about clients or reputation.

Food-poisoning a customer would have severe consequences. Their chain has been around for more than ten years, she said, and they do not want to risk any accidents destroying their reputation and client base. In the long run, she said, they are better off establishing the right procedures and enduring the audits that help them protect the health and safety of their clients. And she named three kinds of losses they work hard to prevent: direct losses from accidents, loss of customers as the result of a scare, and a long-term loss of customer base as the result of reputation and trust decay.

I think there is something we could learn here. The software industry has become completely careless in recent years, and the protection of the customer is so far down the to-do list you can’t even see it. Judging by their customer care, most businesses in the software industry are fly-by-night. And if they are not, what makes them behave as if they are? Is there some misunderstanding in the industry, perhaps, that security problems do not cause costs? Evidently, the companies in the software industry do not care about moral values, so let us see how the same three kinds of losses apply. Maybe I can convince you to rethink the importance of customer care for the industry.

Sony PlayStation Network had an annual revenue of $500 million in 2011, with about a 30% margin, giving them a healthy $150 million in profit a year. That year, their network was broken into and hackers stole thousands of personal records, including credit card numbers. In the two weeks of the accident Sony lost about $20 million, or 13% of their annual profit. When damage compensations of $171 million kicked in, the total shot up to about $191 million, making Sony PlayStation Network lose over $40 million that year. Some analysts say that the long-term damage to the company could run as high as $20 billion. How would you like to work for 40 years just to repay a single security accident? And Sony is a large company; they could take the hit. A smaller company might have keeled over.


And these kinds of costs can come completely unexpectedly from all sorts of security accidents. Thanks to government pressure, we now hear about companies suffering financial disadvantage from incidents that used to be ignored. The US Department of Health & Human Services fined Massachusetts Eye & Ear $1.5 million for the loss of a single laptop that contained unencrypted information. “Oops” indeed. The same year, the UK Information Commissioner’s Office fined Welcome Financial Services Ltd. £150,000 for the loss of two backup tapes. Things are heating up.

Now, the Sony PlayStation Network breach did not only cost Sony money. Brand Index, a company specializing in measuring company image in gaming circles, determined that that year the Sony PlayStation image turned negative for the first time in the company’s history. Gamers actively disliked the Sony brand after the accident. That was enough to relegate Sony from the position of leader in the gaming industry to just a member of the pack.

More interesting tendencies could be seen in the retail industry. TJX, the company operating several large retail chains, suffered a breach back in 2005, when hackers got away with 45 million credit card records. At the time, analysts were predicting large losses of sales that never materialized. TJX paid $10 million in settlement of charges, engaged in promotion, and the sales did not dip.

Fast forward to December 2013, when Target suffered a security breach in which 70 million customer records and 40 million credit card numbers were stolen. Target did not appear too worried and engaged in the familiar tactics of promotions and discount offers. And then the inconceivable happened: the customers actually paid attention and walked away. Total holiday spending dropped by 9.4%, and sales were down 1.5% despite 10% discounts and free credit monitoring offers from the company. As a result, the company’s stock dropped 1.8%. At the scale of Target, we are talking about billions upon billions of dollars.


So, what happened? In 2005, the industry worried but customers did not react. In 2013, the industry habitually did not worry, but customers took notice. Things are changing, even in a market and an industry where software security was never of any interest to either shops or customers. People are starting to pay attention.

Now, if we talk about customer trust and industry image, the food industry serves as a pretty good role model. With hundreds of years of experience behind them, they must have figured out a few useful things we could think of applying to our software industry. Take the dioxin scare of 2011. The traceability built into the food industry allowed them to quickly find out how industrial oil got into animal feed and to trace it to particular farms. Right away, the chickens and pigs at those farms were mercilessly culled. That’s what we call incident response all right. In the aftermath, the food industry introduced a regulation requiring physical separation of industrial and food oil production and created a requirement for labs to publish dioxin findings in food samples immediately.

The food industry has learned that they will not be perceived well if they kill their customers. They are making an effort to establish long-term trust. That’s why they have good traceability, they are merciless in their incident response, and they quickly establish new rules that help to improve customer confidence. Take the story of the horse meat fraud of 2013, when horse meat was sold as beef across Europe. That was not dangerous to health; it was a fraud, selling cheaper meat in place of more expensive. The food industry traced it all back to its origin and found out that the liability for this kind of fraud was insufficient: even after paying the fines, the companies that engaged in the fraud were making a handsome profit. But customer confidence suffered immensely. And the industry took swift action: a proposal to increase the penalties and take tougher measures was accepted by the European Parliament on the 14th of January.

What can we learn from the food industry? They have great traceability of products, detection of all sorts of misbehavior and dangerous agents, and requirements to publish data. The penalties are kept higher than the potential gain, and the response is swift and merciless: either recall or destruction of contaminated goods. All of this taken together helps the industry keep their customers’ trust.

Try to imagine that HTC had been required to recall and destroy all those millions of mobile phones that were found to have multiple security vulnerabilities in 2012. Well, HTC did not waltz away easily, as happened in so many cases before. They had to patch those millions of mobile phones, pass an independent security audit every two years, and, perhaps most telling, they are obliged to tell the truth and nothing but the truth when it comes to security.

And this kind of thing will happen more and more often. Customers and governments are taking an interest in security, they notice when something goes wrong, and we have a big problem on our hands now, both each company individually and the industry as a whole. We will get more fines, more orders to fix things, more new rules imposed, and so on. And you know what? It will all go fast, because we always claim that software is fast: it is fast to produce software, make new technology, the innovation pace and all that. People and organizations are used to thinking of the software industry as fast. So we will not get much advance notice. We will just get hit with requirements to fix things and fix them immediately. I think it would do us good to take some initiative and start changing things ourselves, at a pace that is comfortable for us.

Or do we want to sit around and wait for the crisis to break out?

Fraud Botnet Controls Sales Terminals

Ah, the humanity. Ars Technica reports that researchers came across a proper botnet that controls 31 Point of Sale (POS) servers with an unknown number of actual sales terminals connected to them. The botnet is operational, i.e., it is running and collecting credit card data. The data is transmitted during idle times, in encrypted form, to the command center. The software running the botnet is apparently for sale worldwide on the black market. There is another report, by Arbor Networks, that tracks a widespread attack campaign, mostly in Asia. So much for credit card security…

Planning a redesign and a move…

I have a confession to make. I am planning to redesign the website. I am toying with several ideas about what things should look like. I plan to provide more information for non-specialists and even non-developers of software. You would probably agree that the website is rather intimidating to the uninitiated at the moment. I want it to be a little friendlier, so perhaps a division into chapters or separate catalogues of posts would be right. I am going to try out a few things and keep whatever works.

Anyway, that is all a little way off, likely at the beginning of next year. In order to do the redesign, though, I will have to move off wordpress.com. The platform is wonderful, easy to use and worry-free. The only problem is, you cannot really customize it all you want. For that, one needs one’s own hosting.

So, the first step is likely to happen some time in January. I will move holyhash.com to its own hosting. I will keep all of you who subscribed to updates posted (and thank you for subscribing, I really appreciate your interest). I will announce the move and let you all know when it is done. The website name will remain the same but the internals will be slightly different. Stay tuned.
