Many sectors of the economy, especially those supported by technology (ecommerce and others), are under pressure to increase service availability. The proliferation of access and mobility means clients want to reach their products, accounts and services at any time of day or night.
An expectation of 100% uptime has an impact on operational tasks – in the past it was acceptable to take systems down during planned maintenance windows to reboot, resolve issues, install patches and so on. How does an organisation continue to perform essential maintenance when clients could be accessing its systems at any time? Technology solutions are available, and are more accessible in the redundant and virtual world. However, the pressure remains, and it can affect the governance of operational tasks such as change management. To keep a system up, one may feel the need to reduce the level or depth of testing before implementing a change. This should be a key area of focus for those monitoring or auditing the system development and change management processes.
What are the symptoms of such a scenario? We could analyse the issues and incidents that affect a system and see whether the root cause is related to changes. For example, a high number of emergency changes right after a scheduled change may indicate that the planned changes were not adequately designed or tested. Requiring staff to work longer hours to resolve issues is one thing, but when the client experience is affected, the problem is more serious. We will wait to see whether further details emerge, but the issues experienced this weekend by the National Australia Bank could be related to this pressure:
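The analysis described above can be sketched in a few lines of code. This is a minimal, hypothetical example: the record layout, the 48-hour look-ahead window and the threshold of two emergency changes are all assumptions chosen for illustration, not taken from any real change management tool.

```python
from datetime import datetime, timedelta

# Hypothetical change records: (change_id, type, timestamp).
# Field names, window and threshold are illustrative assumptions.
changes = [
    ("CHG-100", "scheduled", datetime(2010, 11, 20, 2, 0)),
    ("CHG-101", "emergency", datetime(2010, 11, 20, 9, 30)),
    ("CHG-102", "emergency", datetime(2010, 11, 21, 14, 0)),
    ("CHG-103", "scheduled", datetime(2010, 11, 27, 2, 0)),
]

WINDOW = timedelta(hours=48)  # look-ahead period after a scheduled change
THRESHOLD = 2                 # emergency changes that warrant a closer look


def flag_suspect_changes(records):
    """Return scheduled changes followed by >= THRESHOLD emergency changes."""
    flagged = []
    for cid, ctype, ts in records:
        if ctype != "scheduled":
            continue
        followers = [r[0] for r in records
                     if r[1] == "emergency" and ts < r[2] <= ts + WINDOW]
        if len(followers) >= THRESHOLD:
            flagged.append((cid, followers))
    return flagged


print(flag_suspect_changes(changes))
```

In this toy data set, CHG-100 would be flagged because two emergency changes landed within 48 hours of it – exactly the pattern an auditor might take as a prompt to review how that change was designed and tested.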
Looks like it's to help legal battles with other companies that use the words ‘Face’ or ‘Book’. What makes this trademark different is that the word is a common English word, unlike those of other global companies that have gone the same way. Some details:
I wonder whether this will affect other companies that have built their businesses on the social networking site (e.g. google ‘facebook proxy’)?
A ‘breakthrough’ of sorts in the UK – earlier this year the Information Commissioner's Office was granted the power to fine organisations that fail to adequately protect customer data when breaches take place. Two organisations, Hertfordshire County Council and A4e, have now been fined for breaches.
The Council was found guilty of sending sensitive information to the wrong people via fax, which is human error – I wonder how often this happens and goes unnoticed?
A4e suffered the more common fate of losing an unencrypted laptop containing the personal information of 24,000 people, which is a failed operational control as well as human error – why were the records on the laptop in the first place?
ComputerWeekly have a good review of the story:
The feeling is that while the fines are a good move, the amounts don't reflect the severity of the breaches. The fines were GBP 100k and GBP 60k, which do seem more like a slap on the wrist. For me it is a good start, although if investigations are not handled properly and companies feel they have been fined unfairly, fewer companies may end up disclosing data breach incidents.
In South Africa the King III Report and Code was released just over a year ago. The Code speaks to corporate governance and in this latest release places an emphasis on IT governance (which is good – we'll come back to this later).
Another catch phrase to gain prominence from the Report is ‘Combined Assurance’. The concept isn't new, but in these economic times, when efficiencies are sought to manage costs, it is not surprising that the role of the assurance providers is being scrutinised. Those who don't know better may feel that the likes of Internal Audit, Risk (Operational, Market, Credit, etc.) and Compliance fulfil very similar functions, or at least appear to do so from an execution perspective. If business perceives this to be the case, it is most certainly going to ask more questions about alignment and avoiding duplication. Enter Combined Assurance. This is the phrase business has been looking for to tell the assurance providers that it is time to spend more time talking to each other and less time talking to business's hard-working staff. Correct on one level, but risky on another. This example is oversimplified, of course – I am setting the stage for future discussions that will explore the topic from various angles.
From a pure risk management perspective the idea of Combined Assurance makes sense – collaboration is key to improving the effectiveness of a risk management model, enabled by a clear definition of responsibilities, dependencies and relationships. In the IT space, the concept of a multi-layered line of defence (or ‘defence-in-depth’) strategy has been around for some time. There the objective is less about efficiency and more about ensuring that if a control fails, or something falls between the cracks, another control is in place to catch it. This is vital when dealing with a constantly evolving threat landscape, although it may not directly address the issue of independence.
A risk is that too much focus on efficiency may come at the expense of effectiveness. By concentrating on how best to avoid duplication, something may be missed, or an additional layer of monitoring that really needs to be there could be removed. Perception may be the only thing that needs to change, or not. Either way, this topic is certainly worth exploring. Business asking questions should not be the only driver – it makes logical business sense.
Welcome to my blog. I am an information security and IT risk management professional working in the financial services sector. Currently I am in IT Audit and have previously worked in InfoSec and Security Engineering.