Trust, in Security

Definition of Trust: “firm belief in the reliability, truth, or ability of someone or something”

Synonyms: “belief, confidence, faith, certainty”

When it comes to information security and risk management, trust is often dismissed out of hand – “trust is not a control”, “in God we trust, everyone else we audit”. This is because it is easier to work with hard and fast rules – the binary options that have been the mainstay of enterprise security since enterprise security was a thing. To this day we still rely heavily on this approach, e.g.:

  • Authentication (are you who you say you are – yes or no?),
  • Firewalls (block or allow),
  • Anti-virus (does this string match a known malicious signature – yes or no?), and
  • Authorisation (does your role permit you to access this – yes or no?).

The problem is that humans don’t think or operate like this. We rely instead on instinct, gut feel and other equally non-binary mechanisms to make decisions. In the past this wasn’t really an issue: information security was largely the realm of IT and infrastructure, so most humans didn’t notice it. Today’s security, however, is very visible to staff, third parties and clients, and the controls we implement have a direct impact on them. And as humans we come with certain expectations about flexibility and trust (in ourselves).


Trust is a fundamental component of our innate security and risk management models. The algorithms we as humans execute to survive – through our senses, feelings and emotions – place risk, and by extension trust, at their core. In other words, in our daily activities we subconsciously run algorithms to determine and improve our probability of survival. In doing so we constantly evaluate levels of trust and make decisions accordingly, based on both internal and external factors. These decisions are diverse, ranging from driving safely by assessing the trustworthiness of the road or your fellow drivers, to human interactions and specifically relationships, which are fundamentally built on trust. The definition of trust uses words like ‘belief’, ‘reliability’ and ‘ability’, which highlight that this is not a binary consideration. Trust is not ‘on’ or ‘off’, nor is it static. Rather, it is measured on an ongoing basis to varying levels, and it changes over time based on feedback.


Information security systems are already moving into this realm, for example:

  • Modern anti-malware uses behavioural analysis to detect suspicious software processes and activity rather than relying on signature-based detection,
  • User behavioural monitoring solutions detect and alert on suspicious behaviour such as data theft,
  • Interactive authentication solutions take into account how User Interfaces (e.g. keyboard) are used as an additional attribute to authenticate users, and
  • Confidence levels are applied to data classification and threat intelligence feeds that associate a level of trust vs. pure binary decisions.

By embracing these types of solutions, the concept of trust is put at the core of the security model. This approach is partially related to the concept (coined by Forrester) known as ‘zero trust security’, where the security model begins with no trust and then associates trust (and authorisation) based on feedback received from querying certain attributes. Outside of information security, this is actually how most things work – everything starts with zero trust (because, as per the definition, we can’t have any belief or confidence in something we don’t know anything about), but we very quickly build up a level of trust based on feedback. This process can be so quick that we don’t even realise we started at that point.
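To make the idea concrete, here is a minimal sketch of that zero-trust pattern – not any product’s actual API, and the attribute names and weights are purely illustrative assumptions: authorisation starts from zero trust and is derived from accumulated evidence rather than a single yes/no check.

```python
# Illustrative attribute weights -- these names and values are assumptions
# made up for this sketch, not a standard or a real product's configuration.
ATTRIBUTE_WEIGHTS = {
    "mfa_passed": 0.4,
    "known_device": 0.2,
    "usual_location": 0.2,
    "normal_behaviour": 0.2,
}

def trust_score(observations: dict) -> float:
    """Start at zero trust; add weight for each attribute that checks out."""
    score = 0.0
    for attribute, weight in ATTRIBUTE_WEIGHTS.items():
        if observations.get(attribute):
            score += weight
    return score

def authorise(observations: dict, required: float = 0.6) -> bool:
    """Authorise only once enough evidence has accumulated."""
    return trust_score(observations) >= required
```

A device that passes MFA and is recognised clears the threshold; a recognised device alone does not – the decision is a graded score compared against a policy threshold, not a single binary gate.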


So the saying “trust is not a control” may be true, but it almost feels like an answer to the wrong question. Trust is a fundamental part of control, and it can be heavily leveraged through contextual and behavioural analysis to both improve user experience and increase security effectiveness. For many years we’ve treated security and usability as a trade-off – we need to think of them together as one part of the challenge, and focus our efforts on mimicking the millennia of evolutionary refinement in human algorithms to make our information security systems more effective. By adopting this approach we can exercise control both more granularly (through feedback mechanisms that constantly monitor and adjust levels of trust) and in a structure that is less hierarchical and more aligned to living systems (i.e. not top-down but relational and ad hoc).

On the human side, Sumantra Ghoshal supports this idea in his speech “The Smell of the Place” (a very relevant, must-watch video if you are interested in organisational leadership, despite being over 15 years old), in which one of the four comparisons he makes between good and bad (modern and dated) organisational practices is “Contract vs. Trust”. Traditionally, organisations contract with employees to define roles, objectives and ultimately performance. Trusting employees (in conjunction with supporting them) is a more effective way of reaching better performance – it supports the idea that reasons, not rules, make us effective. This is a loose correlation, but it feels like the binary approach to people management in the workplace is also falling away, and the expectation of being able to trust people is increasing.

With regard to information security, how can we trust individuals to be responsible and do the right thing? By designing systems that present responsible people with fewer controls when they are operating at higher trust levels (authentication, information handling, location, behaviour, time and device security posture are some examples of attributes that can be considered). Further, these systems need a constant feedback loop to cater for changes in trust as the characteristics of those attributes change, adjusting security accordingly. We are already well on our way in this journey, and must guard against defaulting to the easy ‘yes/no’ option when posed with new security and business challenges.
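That feedback loop can be sketched in a few lines – again a hypothetical illustration, with made-up class names, thresholds and control labels, not a real system: trust rises and falls with observed signals, erodes when evidence goes stale, and the controls a user actually sees are chosen from the current level.

```python
class TrustLevel:
    """Hypothetical trust tracker: starts at zero trust and moves with feedback."""

    def __init__(self, decay: float = 0.05):
        self.level = 0.0      # zero trust until evidence arrives
        self.decay = decay    # how quickly stale trust erodes per tick

    def feedback(self, signal: float) -> None:
        """Positive signals raise trust, negative ones lower it; clamp to [0, 1]."""
        self.level = max(0.0, min(1.0, self.level + signal))

    def tick(self) -> None:
        """Without fresh evidence, trust decays back toward zero."""
        self.level = max(0.0, self.level - self.decay)

def required_controls(trust: TrustLevel) -> list:
    """Higher trust means fewer visible controls for the user.
    Thresholds and control names are illustrative assumptions."""
    if trust.level >= 0.8:
        return []                                  # seamless access
    if trust.level >= 0.5:
        return ["reauthenticate"]
    return ["reauthenticate", "restrict_downloads", "alert_soc"]
```

The point of the sketch is the shape of the loop: controls are a function of a continuously adjusted trust level, rather than a fixed set applied identically to everyone at all times.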

In closing, nothing I’ve mentioned here is new or revolutionary as many systems already take advantage of Machine Learning and Artificial Intelligence capabilities to become ‘more human’, often referred to as the age of cognitive computing. I wrote this post to summarise some thoughts and test whether these assumptions hold true when written down.


Repost: list of good info sec people to follow

Repost from the ISGAfrica site, originally from Tripwire – a great selection of influential security practitioners to follow:


Protection of Information Bill and its Practicality relating to Information Classification

A bill was passed in the South African parliament today [search Twitter for #POIB or #blacktuesday] which will effectively make it a criminal offence to possess and publish classified information (I wonder if that includes those who are responsible for managing it?). While it hasn’t become law just yet (the bill must still be approved next year), journalists are predicting the end of freedom of speech in the country, which is indeed a very concerning thought.

There are many legal, moral and ethical wars relating to this proposed law going on, but I wonder what the practical ramifications will be, and whether Government will get what they want, or the exact opposite?

Many organisations struggle with the process of identifying, classifying and securing information, so we can expect a government to face a far greater challenge given its complexity and sheer volume of information. Government processes and systems often lag behind cutting-edge blue-chip firms, so there is likely to be a wealth of physical, disparate and unstructured information to deal with. The easy choice (from either a KYA or a practical perspective) is to deem all information classified, and indeed I have heard this suggested in organisations too.

This is impractical: the basis for classifying information is to ensure that more sensitive and confidential information is better managed and controlled, and thus less likely to fall into the hands of those who shouldn’t have it. Classifying everything means more people end up with access to classified information, and is likely to lead to unsustainable or expensive controls.

The net result in this scenario is that the Law could be harder to enforce, as it becomes easier to get (and leak) the information – the opposite of what Government is trying to achieve. Throw social media and the relative anonymity of the Internet into the equation, and I struggle to see how this Law can succeed in muzzling those who wish to seek and share information that (insert your political or moral objective here). This should be an interesting item to watch.

As an aside, I am not for the Bill – the pragmatic view is that there will always be confidential information that should remain confidential outside a select few, but in the spirit of democracy and in the interests of a country you need avenues to expose the information citizens need in order to make informed decisions about future leadership. As with any information classification process, one must ask ‘what is the value of the asset, and what risk are we trying to manage?’ (and perhaps in this case, who stands to benefit?)

LINK: Great blog post on security risk management

A post after my own heart. We need to take a step back and look at the bigger picture when it comes to risk management. What is important, what can go wrong, how can it go wrong, who can make it go wrong? Is it really important? What is the method, motivation and opportunity? Very good read!

Information Security 101 (#sec101)

Coincidentally, this is a theme common to some of my previous posts. I believe it is a sign of the times – as we continue to experience data breaches, we find fundamental control failures behind many of them, which is what prompted those posts.

October is ‘Cyber Security Awareness Month’ over at the SANS ISC diary page.

Tom Liston has put together a post highlighting the concern mentioned above, and then in social networking style opened up the floor to the Twitter universe to see what we thought were some of the fundamental security basics the community (in general) needs a reminder about.

It’s a great little summary with real life context that is definitely worth a read. The post is at:

Three suggestions I put forward (admittedly I was a little late for two of them) speak to where my thoughts and concerns lie:

1. Writing a Policy & not implementing/monitoring doesn’t constitute a control. That’s like buying the firewall and leaving it in the box

2. As pessimistic as it sounds, ‘TRUST’ is not a reliable information security model

3. Security teams that work in isolation and without transparency will fail. Collaborate with other risk mgmt – audit, ops risk, etc

There are plenty of great contributions on the site. Putting forward suggestions was a great exercise, as it forces you to think in (very) succinct terms about key controls and basic security principles.

This content should be part of a training programme somewhere…

Ramblings following on from last Data Breach post

Is there a relationship between the increase of breaches and hacks and the paradigm shift to outsourcing and cloud services? Logic suggests that if services are consolidated then these points of control should be more mature and better equipped to deal with issues, but is this reality or a mindset that leaves us vulnerable to simple attacks?

The connection may be very difficult to see, as the other key factor to consider is why the attacks are taking place. Let’s consider this first. My early InfoSec studies taught me to consider method, motivation and opportunity when assessing the threat and risk to a given asset. Looking across the ‘net these days, it is clear that political drivers are behind certain attacks and hacks (motivation), and that both complex and simple attack vectors (method) are used to achieve the intended result.

However, there are a bunch of other attacks taking place in which the motivation is questionable. In the recent DNS compromise, why were UPS, The Register, National Geographic and Vodafone the targets? The only visible connection is that they use the same DNS provider, NetNames, which suggests the motivation was to disrupt DNS services of major online brands rather than to hit the specific brands themselves. This is of course only one possible explanation, although it does seem the most likely given the hacking group’s track record. It is difficult to say, as we tend to piece together motivation and action from a single point of reference and the facts known to us – much like archaeologists joining the dots using fossils.

One must also consider the nature of the attack – in this case redirecting sites results in a mildly irritating denial of service, which again calls the motivation into question. Based on current knowledge, the group behind this attack appears to have some political motivation. But what is the connection between that and global western brands? I’m not sure there is one. To me it appears that some attacks are motivated, while others are merely opportunistic (completing our attack triad) and at most provide a platform to further advertise a particular group’s message. The motivation, then, is to reach a desired audience (or audience size), not to focus on a particular target.
Once an attack vector has proven successful, the inherently interconnected nature of the Internet means it will be reused opportunistically. As noted in the previous blog post, there are many soft targets out there that struggle with fundamental controls, so it is only a matter of time before these targets are discovered and exposed through tried-and-tested attack vectors.

So what does this mean for the broader Internet community? Stating the obvious – you need good security in place if you have an online presence, regardless of your line of business, as some attacks frankly don’t care who you are and are driven by underlying infrastructure or services. A more lackadaisical view is that you only need marginally better security than your neighbour, as the attack often finds the softest target. However, that can only ever provide a false sense of security, given that some part of your online service offering relies on other providers that may not be as secure as you hope. My fear is that all the current hype about the cloud could lead organisations to simply transfer (and to a point accept) the risk. That approach will not help the Internet community as a whole determine the best way forward to collaboratively protect against future attacks.

News and another great quote – Breached (again?!)

If I kept a running commentary of all the system, service and data breaches currently being disclosed this blog would probably look like it was scrolling in real time. Thankfully a bunch of other sites do a great job of keeping us up to date on the somewhat gloomy happenings across the Internet.

The recent DNS attacks are of particular interest, and concern. DNS is part of the fabric of the Internet, and without it many people’s (click-and-mortar) businesses and livelihoods could come to an abrupt halt. In this case it was large corporations targeted, but it is easy to see smaller home-based companies suffering collateral damage.

It sometimes feels like we have built our Internet/E-commerce house on sand. What is more concerning is that the simple, well-known attacks (SQL injection in this case) are still highly effective. The DigiNotar incident audit report also puts fundamental security control failures at the root of the breach – log management, password controls, patches and network segmentation.

Why do we spend time worrying and analysing APTs and advanced cyber-crime techniques when we still can’t get the basics right?

Brian Honan summed it up well in his Editor comment on the SANS NewsBites email yesterday (Brian I hope you don’t mind me quoting you!):

“This (DNS) attack and the one on DigiNotar highlight how fragile, insecure and unsuitable the Internet is for conducting the type of transactions we are using it for.  Putting security solutions as add-ons to the infrastructure is not working.  We need a fundamental rebuild of the security architecture we are using and we need it now! ” (I like the irony of posting this link)