News and another great quote – Breached (again?!)

If I kept a running commentary of all the system, service and data breaches currently being disclosed this blog would probably look like it was scrolling in real time. Thankfully a bunch of other sites do a great job of keeping us up to date on the somewhat gloomy happenings across the Internet.

The recent DNS attacks are of particular interest, and concern. DNS is part of the fabric of the Internet, and without it many people’s (click-and-mortar) businesses and livelihoods could come to an abrupt halt. In this case it was large corporations targeted, but it is easy to see smaller home-based companies suffering collateral damage.

It sometimes feels like we have built our Internet/E-commerce house on sand. What is more concerning is that the simple, well-known attacks (SQL injection in this case) are still highly effective. The DigiNotar incident audit report also puts fundamental security control failures at the root of the breach – log management, password controls, patches and network segmentation.
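To illustrate why SQL injection remains so effective, here is a minimal sketch (my own illustration, not from any of the incidents above) contrasting a query built by string concatenation with a parameterised one. The table and payload are hypothetical:

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def lookup_vulnerable(name):
    # BAD: attacker-controlled input becomes part of the SQL text itself.
    sql = f"SELECT name FROM users WHERE name = '{name}'"
    return [row[0] for row in conn.execute(sql)]

def lookup_safe(name):
    # GOOD: the driver sends the value separately from the query,
    # so the input can never change the query's structure.
    return [row[0] for row in
            conn.execute("SELECT name FROM users WHERE name = ?", (name,))]

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # the classic payload returns every user
print(lookup_safe(payload))        # returns no users
```

The fix has been known for well over a decade, which is exactly the point: the basics are cheap, and still not applied.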

Why do we spend time worrying about and analysing APTs and advanced cyber-crime techniques when we still can’t get the basics right?

Brian Honan summed it up well in his editor’s comment on the SANS NewsBites email yesterday (Brian, I hope you don’t mind me quoting you!):

“This (DNS) attack and the one on DigiNotar highlight how fragile, insecure and unsuitable the Internet is for conducting the type of transactions we are using it for. Putting security solutions as add-ons to the infrastructure is not working. We need a fundamental rebuild of the security architecture we are using and we need it now!” (I like the irony of posting this link)


The price of 24/7 uptime?

Many sectors of the economy, especially those supported by technology (e-commerce or otherwise), are under pressure to increase service delivery. The proliferation of access and mobility means clients want to reach their products, accounts and services at any time of day or night.

An expectation of 100% uptime has an impact on operational tasks – in the past it was acceptable to take systems down during planned maintenance windows to reboot, resolve issues, install patches, etc. How does an organisation continue to perform essential maintenance when clients could be accessing the systems at any time? Technology solutions are available, and are more accessible in the redundant and virtual world. However, the pressure remains, and it can affect the governance of operational tasks such as change management. In order to keep a system up, one may feel the need to reduce the level or depth of testing prior to implementing a change. This should be a key area of focus for those monitoring or auditing system development and change management processes.

What are the symptoms of such a scenario? We could analyse issues and incidents that affect a system and see whether the root cause is related to changes. For example, a high number of emergency changes right after a scheduled change may indicate the planned changes were not adequately designed or tested. Requiring staff to work longer hours to resolve issues is one thing, but when the client experience is affected, it is more serious. We will wait to see whether further details emerge, but the issues experienced this weekend by the National Australia Bank could be related to this pressure.
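That kind of root-cause check can be done mechanically against a change log. A minimal sketch, assuming a hypothetical record format of (date, change type) and an arbitrary three-day window – the data below is invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical change log entries: (date, change type).
changes = [
    ("2011-09-03", "scheduled"),
    ("2011-09-04", "emergency"),
    ("2011-09-04", "emergency"),
    ("2011-09-10", "scheduled"),
    ("2011-09-20", "emergency"),
]

def emergencies_after_scheduled(changes, days=3):
    """Count emergency changes raised within `days` of a scheduled change.

    A high count suggests planned changes were poorly designed or tested.
    """
    parsed = [(datetime.strptime(d, "%Y-%m-%d"), t) for d, t in changes]
    scheduled = [d for d, t in parsed if t == "scheduled"]
    window = timedelta(days=days)
    return sum(
        1
        for d, t in parsed
        if t == "emergency" and any(s <= d <= s + window for s in scheduled)
    )

print(emergencies_after_scheduled(changes))  # 2 of the 3 emergencies follow a release
```

The threshold and window are judgement calls; the point is that the auditor's question ("do emergency changes cluster after releases?") reduces to a simple query over existing records.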

NEWS: Facebook on the brink of trademarking the word ‘Face’

Looks like it’s to help with legal battles against other companies that use the words ‘Face’ or ‘Book’. What makes this trademark different is that ‘Face’ is a common English word, unlike the invented names other global companies have trademarked. Some details:

I wonder if this will affect other companies that have based their business on the social networking site (e.g. google ‘facebook proxy’)?

NEWS: UK ICO delivers fines for data breaches

A ‘breakthrough’ of sorts in the UK – earlier this year the Information Commissioner’s Office was granted the power to fine organisations that fail to adequately protect customer data when breaches take place. Two organisations, Hertfordshire County Council and A4e, have now been fined for breaches.
The Council was found guilty of sending sensitive information to the wrong people via fax – human error. I wonder how many times this happens and goes unnoticed?
A4e suffered the more common fate of losing an unencrypted laptop holding the personal information of 24,000 people – a failed operational control as well as human error. Why were the records on the laptop in the first place?

ComputerWeekly have a good review of the story.

The feeling is that while the fines are a good move, the amounts don’t reflect the severity of the breaches. At GBP 100k and GBP 60k they seem more like a slap on the wrist. For me it is a good start, although if investigations are not handled properly and companies feel they are fined unfairly, fewer companies may disclose data breach incidents.