Information Security 101 (#sec101)

Coincidentally, this is a theme common to some of my previous posts. I believe it is a sign of the times: as we continue to experience data breaches, we find fundamental control failures behind many of them, which is what prompted those posts in the first place.

October is ‘Cyber Security Awareness Month’ over at the SANS ISC diary page.

Tom Liston has put together a post highlighting the concern mentioned above, and then, in social networking style, opened the floor to the Twitter universe to see what we thought were some of the fundamental security basics the community (in general) needs a reminder about.

It’s a great little summary with real-life context that is definitely worth a read. The post is at:

http://isc.sans.edu/diary/Security+101+Security+Basics+in+140+Characters+Or+Less/11725

The three suggestions I put forward (admittedly I was a little late with two of them) speak to where my thoughts and concerns lie:

1. Writing a policy & not implementing/monitoring it doesn’t constitute a control. That’s like buying a firewall and leaving it in the box

2. As pessimistic as it sounds, ‘TRUST’ is not a reliable information security model

3. Security teams that work in isolation and without transparency will fail. Collaborate with other risk mgmt – audit, ops risk, etc

There are plenty of great contributions on the site. Putting forward suggestions was a great exercise, as it forces you to think in (very) succinct terms about key controls and basic security principles.

This content should be part of a training programme somewhere…


Information Class-ed-ification

A poll of information security practitioners would likely suggest that Information Classification is a task we all talk about, but one that is not operationally feasible in highly complex environments. Given the apparent practical difficulties in implementing such a policy, it is not uncommon for organisations to try to work around it, leaving the draft document to gather dust.

Some of the challenges include:

• Getting people in the business (who understand the information and the risks to it) together to classify information.

• Forcing the myriad types of business information into categories such as Classified, Internal, Public or other.

• Identifying and tagging information based on classification.

• Monitoring a control environment that spans systems, physical locations and nearly every nook and cranny of a business.

I disagree with this defeatist approach. The classification of information (i.e. one of the organisation’s key assets) is a fundamental step in determining the risks related to that information, and it determines the types and levels of control that need to be implemented to protect the information adequately. Everything else hinges on understanding this principle, from implementing layered security, to pulling this together into a logical architecture, to preparing for future threats in our changing landscape. If an organisation has a clear understanding of what information it has, who uses it, where it is stored and processed, and what its value is, then the control environment fits properly over and around the people, processes and systems that manage the information. New threats are then exactly that: threats presenting different attack vectors that can be more easily identified and lend themselves to a (more) quantifiable assessment of risk.
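To illustrate the point, an information inventory might capture exactly those attributes: what the information is, who uses it, where it lives and what it is worth. The sketch below is a minimal, hypothetical record structure of my own devising, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class InformationAsset:
    """One entry in a (hypothetical) information inventory."""
    name: str                                   # what the information is
    users: list = field(default_factory=list)   # who uses it
    locations: list = field(default_factory=list)  # where it is stored/processed
    value: str = "unknown"                      # business value, e.g. low/medium/high

# With an inventory like this in place, a new threat can be assessed
# against concrete assets rather than in the abstract.
crm_records = InformationAsset(
    name="Customer contact records",
    users=["Sales", "Client Services"],
    locations=["CRM database", "nightly backup tapes"],
    value="high",
)
```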

Without a proper information classification process, the following risks become apparent:

• Unsustainable controls: There could be a mismatch between the strength of the controls and the value of the information. Stronger controls require more resources and cost more, so this could mean security budgets are misdirected, or highly sensitive information is not adequately protected.

◦ In the above scenario, too much information could also be pushed into the ‘highly sensitive’ category, which requires these stronger controls. Over time those controls become unsustainable and may deteriorate. For example, if every database fell into this classification, then all system, application and database administrators might require the top level of access (we know they don’t, but they are sometimes very good at justifying that they do!). The result is no segregation between types of information, which defeats the purpose. You may as well have a handful of controls granting everyone the same level of access (which, of course, still presents a high level of risk).

• Silos of controls: Regulation is hitting businesses from all angles. Due to time pressures (or pressures from different parts of the business: legal, clients, COOs), a bottom-up approach to plugging the gaps might be forced. This could result in a siloed approach to implementing controls. The net result could be controls layered over the same type of information, no single view of what is happening (making monitoring difficult), and general resource wastage.

I would take this problem a step further. In many cases it is difficult to define and implement a security policy without a clear indication of what the business is trying to achieve in the relevant area. Social media is a good example: how best to manage the associated risks will largely depend on the business drivers and strategic objectives relating to social media and networking. So, to provide context to an Information Classification policy, there should be an overarching information management strategy. The organisation needs to first define the what, why, who, how, when and where of information, as well as other principles (avoiding duplication, making information accessible to the right people at the right time, etc.), before it can determine how best to secure this information. How many organisations have such a policy or strategy document?

As a starting point, rather than trying to classify information by sensitivity (as per a typical information classification policy), first identify the information based on categories such as those below:

• Transactional

• Employee

• Client information

• Business strategy

• Marketing

• Financial (reports)

• Risk issues and controls (e.g. audit reports, incidents)

Then devise a set of controls that can be mapped to these types of information. If the correct controls are designed and implemented, the most sensitive information will naturally have stronger controls in place. This top-down approach can also help with regulatory requirements: one can define requirements for PCI, POPI, NDAs and so on, and then apply those requirements to the ‘client information’ set (or whatever the case may be), as in the sketch below.
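To make that mapping concrete, here is a minimal sketch of how controls could be tied to information categories, with regulatory requirements layered onto the relevant set. The control names and the PCI/POPI groupings are illustrative assumptions, not a definitive baseline:

```python
# Baseline controls per information category (illustrative assumptions).
BASELINE_CONTROLS = {
    "transactional": {"integrity checks", "access logging"},
    "employee": {"access control", "retention policy"},
    "client": {"access control", "encryption at rest", "access logging"},
    "business strategy": {"need-to-know access"},
    "marketing": {"approval workflow"},
    "financial reports": {"segregation of duties", "access logging"},
    "risk issues and controls": {"need-to-know access", "retention policy"},
}

# Regulatory overlays applied on top of the baseline (assumed groupings).
REGULATORY_OVERLAYS = {
    "client": {
        "PCI": {"cardholder data encryption", "quarterly vulnerability scans"},
        "POPI": {"consent management", "breach notification process"},
    },
}

def controls_for(category: str) -> set:
    """Combine a category's baseline controls with any regulatory overlays."""
    controls = set(BASELINE_CONTROLS.get(category, set()))
    for requirements in REGULATORY_OVERLAYS.get(category, {}).values():
        controls |= requirements
    return controls

# The 'client information' set picks up PCI and POPI requirements automatically.
print(sorted(controls_for("client")))
```

The design point is simply that regulations attach to categories of information rather than to individual systems, so a new requirement is added in one place and flows to everything mapped to that category.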

My last point on this is that I believe the top-down approach lends itself to a collaborative risk management and assurance approach, as there is always a ‘big picture’ to start with, and the goals and objectives of each team (InfoSec, Operational Risk, Audit, Privacy, etc.) become clearer. Reporting on the overall process will be easier and more tangible, increasing your chances of boardroom understanding and buy-in.

Seven Habits of Highly Effective Risk Management

I’ve been giving some thought to what makes a good risk management function. What follows is a summary of the seven key attributes or processes I settled on, in no particular order. It is worth noting that these are not specific to IT or information security, but apply to any environment where risk needs to be managed.

1. Objectivity

Be firm but fair. Always exercise professional scepticism when evaluating the effectiveness of a control environment. Just because someone says something works doesn’t necessarily mean it works as it should. Independence is also important: those too close to a process are not always in the best position to identify risks.

2. Risk assessment and reassessment

People, processes, the environment, technology: everything and anything changes. Consider the rate of change and other factors (such as incidents) and define a formal plan for revisiting the risk assessment phase. Check that your attributes, ratings and areas of assessment are still valid.

3. Refinement of controls

In conjunction with #2, revisit controls and associated processes to validate that they are effective and efficient. Even if the environment (and the risk) hasn’t changed, there may be a better way of managing the risk, such as a new technology. Using maturity models can help track and measure changes in the effectiveness of controls. Although monitoring controls is a fundamental part of risk management (as in the prescribed Plan-Do-Check-Act methodology), it does not constitute independent review. Make sure your audit team has a look at the control environment to give their own view.
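As a rough sketch of what maturity tracking could look like in practice, the snippet below records a simple 1-to-5 maturity level per control at each review and flags deterioration. The scale, control names and sample data are assumptions for illustration only:

```python
# Maturity level (1-5) recorded for each control at each review
# (control names and data are illustrative assumptions).
MATURITY_HISTORY = {
    "firewall rule review": [("2010-Q1", 2), ("2010-Q3", 3), ("2011-Q1", 3)],
    "user access recertification": [("2010-Q1", 4), ("2010-Q3", 3), ("2011-Q1", 2)],
}

for control, history in MATURITY_HISTORY.items():
    change = history[-1][1] - history[0][1]  # latest level minus first level
    status = ("improving" if change > 0
              else "deteriorating" if change < 0
              else "stable")
    print(f"{control}: currently level {history[-1][1]} ({status})")
```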

4. Collaboration with other risk management functions

Work environments can be very complex. Many systems, processes, laws, regulations and other attributes combine to keep a business working. It is nearly impossible for a single division to have 1) the knowledge required to understand all risks, and 2) the visibility of all components of the business. Even those teams with similar skill sets (such as IT security and IT audit) have different objectives and so need to work together to help the business manage risk effectively. Rather than working in isolation there is more value in assurance providers and risk managers working together to ensure appropriate lines of defence are in place while avoiding duplication of efforts (i.e. combined assurance).   

5. Awareness and training

The potentially high rate of change in an environment (systems, business processes, risks, technologies or regulation) poses a challenge to those responsible for identifying and managing risks. Continued learning and awareness are fundamental to obtaining and sustaining the necessary knowledge of the environment and its associated risks. At the same time, we are challenged by having too much information readily available, so it is important to filter out what is relevant and to identify trustworthy sources. Engaging with other professionals in this context is a great way of keeping up to date with changes relevant to your environment.

6. Collaboration with business

Apart from making the workplace more pleasant, there is value in building a trusted relationship between risk managers/assurance providers and business stakeholders. The more these parties engage, the more opportunity there is for knowledge transfer. The more risk managers know about the environment in which they operate, the more effective they can be and the more value they can demonstrate.

7. Fairness

Most (if not all) divisions have the business’ best interests at heart, and risk functions are no exception. A risk manager’s job is not to always err on the side of caution and drive the control environment to the point where all risks are fully mitigated, as this hinders the business’ ability to operate or perform successfully. Many businesses need to take risks to make money, and it is the responsibility of risk functions to allow them to do so while managing those risks to an acceptable level. Managing risks is not always about fully mitigating them.

Pubcast: 2011 Predictions

A couple of weeks ago Tony Olivier invited me to join some top security experts on a podcast to discuss InfoSec trends for 2011 in South Africa. Being my first podcast experience, I didn’t have much to say, but I found it to be an extremely insightful and interesting discussion, ranging from compliance to technology to Foursquare. To Helaine, Kris, Craig, Haroon, Matt and Tony: well done, guys!

If you have the time, do head over to DiscussIT.co.za and have a listen.

http://www.discussit.co.za/index.php?option=com_content&task=view&id=283&Itemid=1