A long-held belief in security holds that humans are the weakest link. This view has been reinforced by social engineers demonstrating how easily people can be tricked into certain behaviours (e.g., giving away information or providing access), and by a rapidly evolving sense of security awareness in industry. The sentiment has created the impression that large investments must be made to increase employee awareness of security threats. The hope, it seems, is that awareness training will change behaviour, which in turn will reduce security incidents involving human interaction, whether positive or negative.
The human-is-the-weakest-link paradigm is an easy excuse to reach for when security incidents grow in both number and impact each year. The paradigm is positive when it leads to increased investment in security controls (both technical and organisational) and when it supports policies with technology and technology with knowledge. It is negative when used as an excuse for poor results, or when it rests on a poor understanding of the human mind.
The human-as-a-firewall paradigm is based on the idea that humans can recognise and handle new patterns better than computers, and thus are better suited to taking care of (some) security decisions. This paradigm is driven mainly as a counter-discourse to the human-as-the-weakest-link concept, offering an opposing view that leads to different actions.
Incorporate balanced programs
Where the first paradigm’s disregard for human abilities looks down on employees, enabling and strengthening potential hostility between employees and those in charge of security, the second paradigm may place too much faith in human abilities. Research into the human mind strongly suggests that we are predictably irrational (Ariely, 2008) in our decision making, and that context matters more than we like to accept.
The human mind is more complex than these two paradigms suggest. Humans can be both weak links and firewalls: context, social setting, training, time and many other factors are at play. Building and improving a security culture requires more than awareness training programmes. Programmes must engage employees and their colleagues in dialogue, curiosity and responsibility. Programmes must start and end with the employee, based on the organisation’s risk profile, the employee’s position, the employee’s access to information, and the organisation’s security culture score.
Our results show that organisations that combine organisational and technical controls with an understanding of their organisational needs generate better results than organisations that fail to incorporate balanced programs.
Measuring security is a given when discussing technical controls. Like technical controls, organisational controls such as security culture must be measured in order to understand and manage change. Organisational sociology researchers have observed for decades that a one-size-fits-all approach is ineffective.
If organisations want to elicit change (e.g., by increasing their security level), greater focus should be placed on the main determinants of change, which are contextually specific and must be empirically verified. The CLTRe Toolkit empirically detects and verifies these mechanisms of change, helping to ensure that investment in specific interventions produces the desired outcomes.
Combining state-of-the-art web survey research, predictive analytics and business intelligence yields analytical procedures for monitoring complex internal organisational processes. The information gathered through these approaches can help decision makers efficiently improve organisational processes and drive desired change.
Measuring security culture is done by applying socio-informatic principles and methods. The CLTRe Toolkit was built on sound scientific research combined with security domain expertise. Measuring security culture provides guidance for improving organisational security (AlHogail, 2015). The essential logic of this approach is that if we want to change reality, we first need valid information about the true state of the socio-technical reality. To measure that reality precisely, a rigorous scientific approach is needed, one that results in sophisticated metrics. When such metrics are put into practice, the true state of security culture can be assessed and effective decisions can be made based on the information generated.
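To make the measurement idea concrete, here is a minimal sketch of how Likert-scale survey responses could be aggregated into per-dimension scores and an overall culture score. The dimension names, the 1–5 answer scale, and the 0–100 rescaling are illustrative assumptions for this sketch, not the CLTRe Toolkit’s actual method or metrics.

```python
# Hypothetical sketch: turning survey answers into a security culture score.
# Dimensions, scales and weighting below are illustrative assumptions only.
from statistics import mean

def dimension_scores(responses):
    """Average 1-5 Likert answers per dimension, rescaled to 0-100."""
    scores = {}
    for dimension, answers in responses.items():
        # (mean - 1) / 4 maps the 1-5 scale onto 0-1; multiply to get 0-100
        scores[dimension] = round((mean(answers) - 1) / 4 * 100, 1)
    return scores

def culture_score(scores):
    """Overall score as an unweighted mean of the dimension scores."""
    return round(mean(scores.values()), 1)

# Example survey data for three assumed dimensions
survey = {
    "attitudes":  [4, 5, 3, 4],
    "behaviours": [3, 3, 4, 2],
    "compliance": [5, 4, 4, 5],
}
dims = dimension_scores(survey)
print(dims)                  # per-dimension scores on a 0-100 scale
print(culture_score(dims))   # single overall figure
```

In practice, a real instrument would weight dimensions by validated factor loadings and track scores over time per organisational unit; the point here is only that the metric is computed from observed responses rather than assumed.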
First published by CLTRe in the Security Culture Report 2017: In-depth insights into the Human Factors (pp. 36-37, 54-55), May 2017.
The Security Culture Report 2017: In-depth insights into the Human Factors is available to download.
Get the full report and read online: https://get.clt.re/security-culture-report-2017 or, if you prefer a printed copy, it can be ordered from: https://www.amazon.com/Indepth-insights-into-human-factor/dp/1544933940