Trust, in Security

Definition of Trust: “firm belief in the reliability, truth, or ability of someone or something”

Synonyms: “belief, confidence, faith, certainty”

When it comes to information security and risk management, trust is often easily dismissed – “trust is not a control”, “in God we trust, everyone else we audit”. This is because it is easier to work with hard and fast rules – binary options that have been the mainstay of enterprise security since enterprise security was a thing. To this day we still rely heavily on this approach (a caricature of which is sketched after the list below), e.g.:

  • Authentication (are you who you say you are – yes or no?),
  • Firewalls (block or allow),
  • Anti-virus (does this string match a known malicious signature – yes or no?), and
  • Authorisation (does your role permit you to access this – yes or no?).
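
To make the contrast concrete, here is a minimal sketch of that binary style; the ports, roles and request attributes are hypothetical, purely for illustration:

```python
# A caricature of the traditional binary control model: every request is
# reduced to a single yes/no answer against fixed rules. The rule values
# and request attributes below are hypothetical.
ALLOWED_PORTS = {443, 22}
AUTHORISED_ROLES = {"admin", "finance"}

def allow_request(port: int, role: str, signature_match: bool) -> bool:
    """Return True only if every hard rule passes; there is no middle ground."""
    if port not in ALLOWED_PORTS:
        return False          # firewall: block
    if role not in AUTHORISED_ROLES:
        return False          # authorisation: deny
    if signature_match:
        return False          # anti-virus: matches a known-bad signature
    return True               # everything passed: allow

print(allow_request(443, "finance", signature_match=False))  # True
print(allow_request(443, "guest", signature_match=False))    # False
```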

The problem is that humans don’t think or operate like this. We rely instead on instinct, gut feel and other equally non-binary mechanisms to make decisions. In the past this wasn’t really an issue: information security sat largely in the realm of IT and infrastructure, so most people never noticed it. Today’s security, however, is very visible to staff, third parties and clients, and the controls we implement have a direct impact on them. And as humans we bring certain expectations about flexibility and trust (in ourselves) with us.


Trust is a fundamental component of our innate security and risk management models. The algorithms we as humans execute to survive – through our senses, feelings and emotions – have risk, and by extension trust, at their core. In other words, as we go about our daily activities we subconsciously run algorithms that assess and improve our probability of survival. In doing so we are constantly evaluating levels of trust and making decisions accordingly, based on both internal and external factors. These decisions are diverse, ranging from driving safely by assessing the trustworthiness of the road and our fellow drivers, to human interactions and in particular relationships, which are fundamentally built on trust. The definition of trust uses words like ‘belief’, ‘reliability’ and ‘ability’, which highlight that this is not a binary consideration. Trust is not ‘on’ or ‘off’, nor is it static. Rather, it is assessed on an ongoing basis and held to varying degrees, changing over time based on feedback.


Information security systems are already moving into this realm, for example:

  • Modern anti-malware uses behavioural analysis to detect suspicious software processes and activity rather than relying on signature-based detection,
  • User behavioural monitoring solutions detect and alert on suspicious behaviour such as data theft,
  • Interactive authentication solutions take into account how User Interfaces (e.g. keyboard) are used as an additional attribute to authenticate users, and
  • Confidence levels are applied to data classification and threat intelligence feeds, associating a level of trust with each judgement rather than making purely binary decisions (a rough sketch of this graded approach follows below).
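
As a hedged illustration of that last point, the sketch below scores behaviour on a continuous scale and maps it to a graded response; the signals, weights and thresholds are all hypothetical:

```python
# A rough sketch of scoring behaviour on a continuous scale instead of a
# binary signature match. Signals, weights and thresholds are hypothetical.
SUSPICION_WEIGHTS = {
    "mass_file_reads": 0.4,      # unusually broad access to documents
    "off_hours_activity": 0.2,   # activity well outside normal working hours
    "new_external_upload": 0.3,  # first-seen upload to an external destination
    "impossible_travel": 0.5,    # logins from two implausibly distant locations
}

def suspicion_score(observed_signals: set[str]) -> float:
    """Combine observed behavioural signals into a 0..1 suspicion score."""
    return min(1.0, sum(SUSPICION_WEIGHTS.get(s, 0.0) for s in observed_signals))

def triage(score: float) -> str:
    """Map the score to a graded response rather than a hard block/allow."""
    if score >= 0.7:
        return "isolate and investigate"
    if score >= 0.4:
        return "step-up authentication and alert"
    return "monitor"

signals = {"off_hours_activity", "new_external_upload"}
print(triage(suspicion_score(signals)))  # step-up authentication and alert (score 0.5)
```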

By embracing these types of solutions, the concept of trust is put at the core of the security model. This approach is partly related to the concept (coined by Forrester) known as ‘zero trust security’, where the security model begins with no trust and then assigns trust (and authorisation) based on feedback received from querying certain attributes. Outside of information security, this is actually how most things work – everything starts with zero trust (because, as per the definition, we can’t have any belief or confidence in something we know nothing about), but we very quickly build up a level of trust based on feedback. This process can be so quick that we don’t even realise we started at that point.


So the saying “trust is not a control” may be true, but it almost feels like an answer to the wrong question. Trust is a fundamental part of control, and it can be leveraged through contextual and behavioural analysis to both improve user experience and increase security effectiveness. For many years we’ve framed security as a trade-off between security and usability; we need to treat these as parts of the same challenge and focus our efforts on mimicking the millennia of evolutionary refinement in human algorithms to make our information security systems more effective. By adopting this approach we can exercise control both more granularly (through feedback mechanisms that constantly monitor and adjust levels of trust, as sketched below) and in a structure that is less hierarchical and more aligned to living systems (i.e. not top down, but relational and ad hoc).
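
A minimal sketch of that feedback idea, assuming a simple exponential smoothing of good and bad signals into a trust level (the smoothing factor and feedback values are invented for illustration):

```python
# A minimal sketch of trust as a continuously adjusted value rather than a
# one-off grant: it starts at zero (no prior belief) and is nudged up or
# down by feedback. The smoothing factor and feedback stream are hypothetical.
class TrustLevel:
    def __init__(self, smoothing: float = 0.2):
        self.value = 0.0              # zero trust until evidence accumulates
        self.smoothing = smoothing    # how quickly new feedback moves the level

    def update(self, feedback: float) -> float:
        """Blend new feedback (0.0 = bad, 1.0 = good) into the running level."""
        self.value = (1 - self.smoothing) * self.value + self.smoothing * feedback
        return self.value

trust = TrustLevel()
for feedback in (1.0, 1.0, 1.0, 0.0, 1.0):    # mostly positive, one bad signal
    print(round(trust.update(feedback), 2))    # 0.2, 0.36, 0.49, 0.39, 0.51
```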

On the human side, Sumantra Ghoshal supports this idea in his speech “The smell of the place” (a very relevant and must-watch video if you are interested in organisational leadership, despite being over 15 years old), in which one of the four comparisons he drew between good and bad (modern and dated) organisational practices was “Contract vs. Trust”. Traditionally, organisations contract with employees to define roles, objectives and ultimately performance. Trusting employees (in conjunction with supporting them), on the other hand, is a more effective way of reaching better performance. This supports the idea that reasons, not rules, make us effective. It is a loose correlation, but it feels as though the binary approach to people management in the workplace is also falling away, and the expectation of being able to trust people is increasing.

With regard to information security, how can we trust individuals to be responsible and do the right thing? By designing systems that allow responsible people to see fewer controls when operating at higher trust levels (authentication, information handling, location, behaviour, time and device security posture are some examples of attributes that can be considered). Further, these systems need to provide a constant feedback loop that caters for changes in trust as the characteristics of those attributes change, adjusting security accordingly (a rough sketch of this follows below). We are already well on our way in this journey and must guard against defaulting to the easy ‘yes/no’ option when posed with new security and business challenges.
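
Here is a hedged sketch of gating controls on a trust level derived from attributes like those above; the attributes, weights and thresholds are all hypothetical and would need calibration in practice:

```python
# A hedged sketch of deriving a trust level from several attributes and using
# it to decide how much friction a user sees. Attributes, weights and
# thresholds are hypothetical, not a definitive policy.
ATTRIBUTE_WEIGHTS = {
    "strong_authentication": 0.3,   # e.g. phishing-resistant MFA completed
    "managed_device": 0.25,         # device security posture meets policy
    "known_location": 0.2,          # usual office or home network
    "typical_behaviour": 0.15,      # activity matches the user's baseline
    "working_hours": 0.1,           # request made at an expected time
}

def trust_level(attributes: dict[str, bool]) -> float:
    """Score 0..1 from whichever attributes currently hold true."""
    return sum(w for name, w in ATTRIBUTE_WEIGHTS.items() if attributes.get(name))

def required_controls(trust: float) -> list[str]:
    """Higher trust means fewer visible controls; lower trust adds friction."""
    if trust >= 0.8:
        return []                                          # frictionless access
    if trust >= 0.5:
        return ["re-authenticate for sensitive actions"]
    return ["step-up authentication", "restrict downloads", "notify security"]

current = {
    "strong_authentication": True,
    "managed_device": True,
    "known_location": False,       # travelling, so trust dips and a control appears
    "typical_behaviour": False,
    "working_hours": True,
}
print(required_controls(trust_level(current)))  # ['re-authenticate for sensitive actions']
```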

In closing, nothing I’ve mentioned here is new or revolutionary, as many systems already take advantage of Machine Learning and Artificial Intelligence capabilities to become ‘more human’ – often referred to as the age of cognitive computing. I wrote this post to summarise some thoughts and test whether these assumptions hold true when written down.
