Trust, in Security

Definition of Trust: “firm belief in the reliability, truth, or ability of someone or something”

Synonyms: “belief, confidence, faith, certainty”

When it comes to information security and risk management, trust is often dismissed out of hand – “trust is not a control”, “in God we trust, everyone else we audit”. This is because it is easier to work with hard and fast rules – binary decisions that have been the mainstay of enterprise security since enterprise security was a thing. To this day we still rely heavily on this approach, e.g.:

  • Authentication (are you who you say you are – yes or no?),
  • Firewalls (block or allow),
  • Anti-virus (does this string match a known malicious signature – yes or no?), and
  • Authorisation (does your role permit you to access this – yes or no?).

The problem is that humans don’t think or operate like this. We rely instead on instinct, gut feel and other equally non-binary mechanisms to make decisions. In the past this wasn’t really an issue: information security sat largely in the realm of IT and infrastructure, so most people never noticed it. Today’s security, however, is very visible to staff, third parties and clients, and the controls we implement have a direct impact on them. And as humans we come with certain expectations about flexibility and trust (in ourselves).

 

Trust is a fundamental component of our innate security and risk management models. The algorithms we as humans execute to survive – through our senses, feelings and emotions – consider risk, and by extension trust, at the core. In other words, daily activities subconsciously execute algorithms to determine and improve the probability of survival. In doing so we are constantly evaluating levels of trust and making decisions accordingly based on both internal and external factors. These decisions are diverse, ranging from driving safely by assessing the trustworthiness of the road or your fellow drivers, to human interactions and specifically relationships which are fundamentally built on trust. The definition of trust uses words like ‘belief’, ‘reliability’ and ‘ability’ which highlight this is not a binary consideration. Trust is not ‘on’ or ‘off’, nor is it static. Rather, it is measured on an ongoing basis to various levels. It changes over time based on feedback.

 

Information security systems are already moving into this realm, for example:

  • Modern anti-malware uses behavioural analysis to detect suspicious software processes and activity rather than relying on signature-based detection,
  • User behavioural monitoring solutions detect and alert on suspicious behaviour such as data theft,
  • Interactive authentication solutions take into account how input devices (e.g. the keyboard) are used as an additional attribute to authenticate users, and
  • Confidence levels are applied to data classification and threat intelligence feeds, associating a degree of trust rather than a pure binary decision.

By embracing these types of solutions, the concept of trust is put at the core of the security model. This approach is partially related to the concept (coined by Forrester) known as ‘zero trust security’, where the security model begins with no trust and then associates trust (and authorisation) based on feedback received from querying certain attributes. Outside of information security, this is actually how most things work – everything starts with zero trust (because, as per the definition, we can’t have any belief or confidence in something we know nothing about), but we very quickly build up a level of trust based on feedback. This process can be so quick that we don’t even realise we started at that point.
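As a loose illustration of this idea, a zero-trust model can be sketched as a score that starts at zero and accumulates only as attribute checks provide positive feedback. The attribute names and weights below are entirely hypothetical assumptions for illustration, not drawn from any real product:

```python
# Hypothetical sketch of a zero-trust score: begin with no trust, then
# accumulate weighted evidence from attribute checks. Attributes and
# weights are illustrative assumptions, not a real product's policy.

ATTRIBUTE_WEIGHTS = {  # weights in percent, summing to 100
    "password_ok": 30,
    "known_device": 20,
    "usual_location": 20,
    "typing_pattern_match": 30,
}

def trust_score(observations: dict) -> float:
    """Combine boolean attribute observations into a trust level in [0, 1]."""
    score = 0  # zero trust: no belief until evidence arrives
    for attribute, weight in ATTRIBUTE_WEIGHTS.items():
        if observations.get(attribute):
            score += weight
    return min(score, 100) / 100

# A session from a known device with a matching typing pattern, but from
# an unusual location, earns partial rather than all-or-nothing trust:
print(trust_score({"password_ok": True, "known_device": True,
                   "typing_pattern_match": True}))  # 0.8
```

The point of the sketch is the shape, not the numbers: trust is a continuous quantity built from feedback, rather than a single yes/no gate.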

 

So the saying “trust is not a control” may be true, but it feels like an answer to the wrong question. Trust is a fundamental part of control, and it can be leveraged through contextual and behavioural analysis to both improve user experience and increase security effectiveness. For many years we’ve framed security as a trade-off between security and usability; we need to treat these as one and the same challenge, and focus our efforts on mimicking the millennia of evolutionary refinement in human algorithms to make our information security systems more effective. By adopting this approach we can exercise control both more granularly (through feedback mechanisms that constantly monitor and adjust levels of trust) and in a structure that is less hierarchical and more aligned to living systems (i.e. not top-down but relational and ad hoc).

On the human side, Sumantra Ghoshal supports this idea in his speech “The smell of the place” (a very relevant and must-watch video if you are interested in organisational leadership, despite being over 15 years old), in which one of the four comparisons he draws between good and bad (modern and dated) organisational practices is “Contract vs. Trust”. Traditionally, organisations contract with employees to define roles, objectives and ultimately performance. Trusting employees (in conjunction with supporting them), on the other hand, is a more effective route to better performance. This supports the idea that reasons, not rules, make us effective. It is a loose correlation, but it feels as though the binary approach to people management in the workplace is also falling away, while the expectation of being trusted is increasing.

With regard to information security, how can we trust individuals to be responsible and do the right thing? By designing systems that allow responsible people to encounter fewer controls when operating at higher trust levels (authentication, information handling, location, behaviour, time and device security posture being some examples of attributes that can be considered). Further, these systems need to facilitate a constant feedback loop to cater for changes in trust as the characteristics of those attributes change, adjusting security accordingly. We are already well on our way in this journey and must guard against defaulting to the easy ‘yes/no’ option when posed with new security and business challenges.
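The “fewer controls at higher trust, adjusted by feedback” idea can be sketched as a ladder of controls, each of which is skipped once trust clears a threshold, with a feedback function nudging trust up or down as attribute signals change. The thresholds, control names and signal values are hypothetical assumptions, not any real product’s policy:

```python
# Hypothetical sketch of adaptive controls: the higher the current trust
# level, the fewer controls a user is asked to satisfy. Thresholds and
# control names are illustrative assumptions only.

CONTROL_LADDER = [
    # (minimum trust required to SKIP this control, control name)
    (0.9, "step-up MFA challenge"),
    (0.7, "device posture re-check"),
    (0.4, "restricted download of classified documents"),
]

def required_controls(trust: float) -> list:
    """Return the controls still imposed at a given trust level."""
    return [control for threshold, control in CONTROL_LADDER if trust < threshold]

def adjust_trust(trust: float, signal: float) -> float:
    """Feedback loop: nudge trust up or down as attribute signals change,
    clamping the result to [0, 1]."""
    return max(0.0, min(1.0, trust + signal))

trust = 0.5                        # e.g. known device, unusual location
print(required_controls(trust))    # still faces MFA and posture re-check
trust = adjust_trust(trust, -0.3)  # anomalous behaviour observed
print(required_controls(trust))    # now also restricted from downloads
```

Note that nothing here is a one-off decision: every new signal re-enters the loop, so the controls a user sees can tighten or relax over the life of a session.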

In closing, nothing I’ve mentioned here is new or revolutionary as many systems already take advantage of Machine Learning and Artificial Intelligence capabilities to become ‘more human’, often referred to as the age of cognitive computing. I wrote this post to summarise some thoughts and test whether these assumptions hold true when written down.


That was a long hiatus

6 years since my last post. My eldest son is 6. This is not a coincidence…

Should Technology be at the Heart of IT?

While working with senior IT management a while back, an example highlighted different approaches to managing the IT function, or at least sub-components of it. What became obvious was the influence of management’s background and experience, as people tend to play to their strengths. The example was a discussion about a new position that had been created in the IT governance team relating to process and ITIL.
 
What struck me about the approach was that the technology was already embedded, and the person was being recruited to manage the technology and build process around it. In other words, technology was at the core of the business objective, with administrative and procedural components mapped around it. Is this the right way to approach IT management? The obvious risk is that it leaves the function at the mercy of the given technology: should the technology be replaced within the organisation, this will force a refocus of skills, and possibly of infrastructure and other components radiating outwards. Documentation and other sound governance elements will also require a revisit.
I don’t believe this model is entirely wrong – there may be instances where building a function around a technology is advantageous, especially when core business objectives revolve around that technology.
 
From a maturity, reuse and risk management perspective, I believe process should sit at the core of the function, with technology acting as an enabler – especially given the current rate of change. I mention risk because in the information security space this is analogous to the development of an InfoSec policy, which starts at a sufficiently high level to be technology-agnostic and then plugs current and future technology into lower-level standards and sub-policies. If technology comes before process, the latter constantly has to be updated as the technology matures.
 
So, is the decision to place technology rather than process at the centre down to the skills and experience of management, driven by those closest to the technology (or those who shout the loudest)? And is it therefore destined to be tactical rather than strategic?