Should Technology be at the Heart of IT?

While working with senior IT management a while back, an example highlighted different approaches to managing the IT function, or at least sub-components of it. What stood out was the influence of management's background and experience, as people tend to play to their strengths. The example was a discussion around a new position created in the IT governance team relating to process and ITIL.
 
What struck me about the approach was that the technology was already embedded, and the person was being recruited to manage the technology and build process around it. In other words, technology sat at the core of the business objective, with administrative and procedural components mapped around it. Is this the right way to approach IT management? The obvious risk is that it leaves the function at the mercy of the chosen technology: should the technology be replaced within the organisation, skills will need to be refocused, and possibly infrastructure and the other components radiating outwards as well. Documentation and other sound governance elements will also need to be revisited.
I don’t believe this model to be entirely wrong – there may be instances where building a function around a technology is advantageous, especially when core business objectives revolve around the technology.
 
From a maturity, reuse and risk management perspective I believe the process should be at the core of the function, with technology acting as an enabler, especially given the current rate of change. I mention risk because in the Information Security space this is analogous to the development of an InfoSec Policy, which starts at a sufficiently high level to be technology agnostic and then plugs in current and future technologies through lower-level standards and sub-policies. If technology comes before process, the latter constantly has to be updated as the technology matures.
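As a rough illustration of that layering, here is a minimal sketch in Python (with hypothetical policy and standard names, not a real framework) of a technology-agnostic policy statement with technology-specific standards plugged in beneath it; swapping a technology only touches the lower layer.

```python
# Illustrative only: hypothetical policy/standard names, not a real framework.
from dataclasses import dataclass, field


@dataclass
class TechnologyStandard:
    """Lower-level standard mapping a specific technology onto the policy."""
    technology: str          # e.g. "TLS 1.3", "IPsec"
    configuration_note: str  # how this technology satisfies the requirement


@dataclass
class PolicyStatement:
    """Technology-agnostic requirement that should rarely change."""
    identifier: str
    requirement: str
    standards: list[TechnologyStandard] = field(default_factory=list)


policy = PolicyStatement(
    identifier="IS-POL-07",
    requirement="Sensitive data must be encrypted in transit between systems.",
)

# Replacing or adding a technology only touches the standards layer;
# the policy statement itself stays the same.
policy.standards.append(
    TechnologyStandard("TLS 1.3", "Enforce TLS 1.3 on all external web endpoints.")
)
policy.standards.append(
    TechnologyStandard("IPsec", "Use IPsec tunnels for site-to-site links.")
)

for std in policy.standards:
    print(f"{policy.identifier}: {std.technology} -> {std.configuration_note}")
```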
 
So, is the decision to place technology or process at the centre driven by the skills and experience of management, shaped by those closest to the technology (or those who shout the loudest), and is it destined to be tactical rather than strategic?

Information Class-ed-ification

A poll of information security practitioners might suggest that Information Classification is a task we all talk about, but one that is not operationally feasible in highly complex environments. Given the apparent practical difficulties in implementing such a policy, it is not uncommon for organisations to try to work around it, leaving the draft document to gather dust.

Some of the challenges include:

• Getting people in the business (who understand the information and the risks to it) together to classify information.

• Forcing the myriad forms of business information into categories such as Classified, Public, Internal, or other.

• Identifying and tagging information based on classification.

• Monitoring a control environment that spans systems, physical locations and nearly every nook and cranny of a business.

I tend to disagree with this approach. The classification of information (i.e. one of the organisation’s key assets) is a fundamental step in determining the risks related to information, and it determines the types and levels of control that need to be implemented to adequately protect that information. Everything else hinges on understanding this principle, from implementing layered security and pulling this together into a logical architecture, to preparing for future threats in our changing landscape. If an organisation has a clear understanding of what information it has, who uses it, where it is stored and processed, and what its value is, then the control environment fits properly over and around the people, processes and systems that manage the information. New threats are then exactly that – threats that present different attack vectors that can be more easily identified and lend themselves to a (more) quantifiable assessment of risk.
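To make that concrete, here is a minimal sketch (with hypothetical asset fields, value scale and control names, not a prescribed model) of how knowing what an information asset is, who owns it, where it lives and what it is worth allows proportionate controls to be derived from it.

```python
# Illustrative only: made-up fields, value scale and control names.
from dataclasses import dataclass


@dataclass
class InformationAsset:
    name: str
    owner: str           # business owner who understands the information and its risks
    stored_in: str       # system or location where it is stored and processed
    business_value: int  # e.g. 1 (low) to 5 (critical), agreed with the owner


def required_controls(asset: InformationAsset) -> list[str]:
    """Map the asset's value to a proportionate set of controls."""
    controls = ["access logging"]
    if asset.business_value >= 3:
        controls += ["role-based access control", "encryption at rest"]
    if asset.business_value >= 5:
        controls += ["privileged access monitoring", "data loss prevention"]
    return controls


crm_records = InformationAsset("Client contact records", "Head of Sales", "CRM platform", 4)
print(required_controls(crm_records))
# ['access logging', 'role-based access control', 'encryption at rest']
```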

Without a proper information classification process, the following risks become apparent:

• Unsustainable controls: There could be a mismatch between the strength of the controls and the value of the information. Stronger controls require more resources and cost more, so this could mean security budgets are misdirected, or highly sensitive information is not adequately protected.

o In this scenario, too much information may also be pushed into the ‘highly sensitive’ category, which requires these stronger controls. Over time the controls become unsustainable and could deteriorate. For example, if all databases carried this classification, then all system, application and database administrators might require the top level of access (we know they don’t, but they are sometimes very good at justifying that they do!). The result is no segregation between types of information, which defeats the purpose; you may as well have a few controls that allow everyone the same level of access (which still presents a high level of risk, of course).

• Silos of controls: Regulation is hitting businesses from all angles. Due to time pressures (or pressures from different parts of the business – legal, clients, COOs) a bottom-up approach to plugging the gaps may be forced, resulting in a siloed approach to implementing controls. The net result could be controls layered over the same type of information, no single view of what is happening (making monitoring difficult), and general resource wastage.

I would take this problem a step further. In many cases it is difficult to define and implement a security policy without a clear indication of what the business is trying to achieve in the relevant area. Social media is a good example – how best to manage the associated risks will largely depend on the business drivers and strategic objectives relating to social media and networking. So to provide context to an Information Classification policy, there should be an overarching information management strategy. The organisation needs to first define the what, why, who, how, when and where of its information, as well as other principles (avoiding duplication, making information accessible to the right people at the right time, etc.), before it can determine how best to secure that information. How many organisations have such a policy or strategy document?

As a starting point, rather than trying to classify information by sensitivity (as per a typical information classification policy), first identify the information using categories such as those below:

• Transactional

• Employee

• Client Information

• Business strategy

• Marketing

• Financial (reports)

• Risk issues and controls (e.g. audit reports, incidents)

Then devise a set of controls that can be mapped to these types of information. If the correct controls are designed and implemented, the most sensitive information will naturally end up with stronger controls in place. This top-down approach also helps with regulatory requirements: one can define requirements for PCI, POPI, NDAs and so on, and then apply them to the ‘client information’ set (or whatever the case may be).
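A minimal sketch of that top-down mapping, using hypothetical category and requirement names: the information is identified by category first, and the applicable control and regulatory requirements hang off each category, so a new requirement is added in one place.

```python
# Illustrative only: hypothetical categories and requirement names.
CONTROL_REQUIREMENTS = {
    "client information": ["PCI DSS", "POPI", "NDA handling", "encryption in transit"],
    "employee": ["POPI", "restricted HR access"],
    "financial (reports)": ["segregation of duties", "release embargo"],
    "business strategy": ["board-level distribution only"],
    "risk issues and controls": ["restricted to assurance teams"],
}


def requirements_for(category: str) -> list[str]:
    """Return the control set mapped to an information category."""
    return CONTROL_REQUIREMENTS.get(category.lower(), ["default internal handling"])


# A new regulatory requirement is added in one place and flows to
# everything identified as belonging to that category.
CONTROL_REQUIREMENTS["client information"].append("records retention schedule")

print(requirements_for("Client Information"))
```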

My last point on this is that I believe the top-down approach lends itself to a collaborative risk management and assurance approach, as there is always a ‘big picture’ to start with, and the goals and objectives of each team (InfoSec, Operational Risk, Audit, Privacy, etc.) become clearer. Reporting on the overall process will be easier and more tangible, increasing your chances of board-room understanding and buy-in.

The price of 24/7 uptime?

Many sectors of the economy, especially those supported by technology (e-commerce or otherwise), are under pressure to increase service delivery. The proliferation of access and mobility means clients want to reach their products, accounts and services at any time of day or night.

100% (expected) uptime has an impact on operational tasks – in the past it was acceptable to take systems down during planned maintenance windows to reboot, resolve issues, install patches, etc. How does an organisation continue to perform essential maintenance when clients could be accessing the systems at any time? Technology solutions are available, and more accessible in the redundant and virtual world. However, the pressure remains, and it could affect the governance of operational tasks such as change management. In order to keep a system up, one may feel the need to reduce the level or depth of testing prior to implementing a change. This should be a key area of focus for those monitoring or auditing the system development and change management processes.

What are the symptoms of such a scenario? We could analyse issues and incidents that affect a system and see if the root cause is related to changes. For example, a high number of emergency changes immediately after a scheduled change may indicate that the planned changes were not adequately designed or tested. Requiring staff to work longer hours to resolve issues is one thing, but when the client experience is affected it is more serious. We will wait to see whether further details emerge, but the issues experienced this weekend by National Australia Bank could be related to this pressure:

http://www.news24.com/World/News/No-cash-after-computer-crash-20101127
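As a simple illustration of the symptom check described above, here is a sketch (with made-up change records and an assumed 48-hour follow-up window) that flags emergency changes raised shortly after a scheduled change on the same system.

```python
# Illustrative only: hypothetical change records and follow-up window.
from datetime import datetime, timedelta

# (system, change type, implemented at)
changes = [
    ("internet-banking", "scheduled", datetime(2010, 11, 27, 2, 0)),
    ("internet-banking", "emergency", datetime(2010, 11, 27, 9, 30)),
    ("internet-banking", "emergency", datetime(2010, 11, 27, 14, 15)),
    ("payments-gateway", "scheduled", datetime(2010, 11, 20, 3, 0)),
]

WINDOW = timedelta(hours=48)  # assumed window worth investigating after a scheduled change

for system, _, sched_time in (c for c in changes if c[1] == "scheduled"):
    followups = [
        c for c in changes
        if c[0] == system and c[1] == "emergency"
        and sched_time < c[2] <= sched_time + WINDOW
    ]
    if followups:
        hours = int(WINDOW.total_seconds() // 3600)
        print(f"{system}: {len(followups)} emergency change(s) within {hours}h "
              f"of the scheduled change on {sched_time:%Y-%m-%d}")
```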

Combined Assurance, or just collaboration?

In South Africa the King III Report and Code was released just over a year ago. The Code addresses corporate governance and, in this latest release, places an emphasis on IT governance (which is good – we’ll come back to this later).
Another catch phrase to gain prominence from the Report is ‘Combined Assurance’. The concept isn’t new, but in these economic times, when efficiencies are sought to manage costs, it is not surprising that the roles of the assurance providers are being scrutinised. Those who don’t know better may feel that the likes of Internal Audit, Risk (Operational, Market, Credit, etc.) and Compliance fulfil very similar functions, or at least appear to do so from an execution perspective. If business perceives this to be the case, then it is most certainly going to ask more questions about alignment and avoiding duplication. Enter Combined Assurance. This is the phrase business has been looking for to tell the assurance providers that it is time to spend more time talking to each other and less time talking to their hard-working staff. Correct on one level, but risky on another. This example is oversimplified of course – I am setting the stage for future discussions on the topic to explore it from various angles.
From a pure risk management perspective the idea of Combined Assurance makes sense – collaboration is key to improving the effectiveness of a risk management model, enabled by a clear definition of responsibilities, dependencies and relationships. In the IT space the concept of a multi-layered line of defence (or ‘defence-in-depth’) strategy has been around for some time. The objective is less about efficiency and more about ensuring that, in the event of a control failure or something falling between the cracks, another control is in place. This is vital when dealing with a constantly evolving threat landscape, although it may not directly address the issue of independence.
A risk is that too much focus on efficiency may come at the expense of effectiveness. By concentrating on how best to avoid duplication, something may be missed, or an additional layer of monitoring that really needs to be there could be removed. Perception may be the only thing that needs to change, or it may not. Either way, the topic is certainly worth exploring, and business asking questions should not be the only driver – it makes logical business sense.