Does information security still make sense, or is it enough to comply with the legal requirements? What is behind the ISMS concept?
We look behind the facades and examine why companies find it so difficult to approach security concepts pragmatically, and why the question of ROSI (Return on Security Investment) keeps being asked.
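The ROSI question can be made concrete with a back-of-the-envelope calculation. The following sketch uses a common formulation (e.g. as popularized by ENISA): ROSI compares the loss a measure avoids against what the measure costs. All figures are invented for illustration.

```python
# Illustrative ROSI (Return on Security Investment) calculation.
# All numbers below are invented for the example.

def rosi(ale, mitigation_ratio, cost):
    """ROSI = (avoided loss - cost of the measure) / cost of the measure.

    ale: annual loss expectancy without the measure (e.g. in EUR)
    mitigation_ratio: fraction of that loss the measure prevents (0..1)
    cost: annual cost of the measure
    """
    avoided_loss = ale * mitigation_ratio
    return (avoided_loss - cost) / cost

# A breach expected to cost 200,000 EUR per year on average,
# a measure that prevents 75% of that loss and costs 50,000 EUR per year:
print(rosi(200_000, 0.75, 50_000))  # 2.0
```

A ROSI above zero means the measure avoids more loss than it costs; the difficulty in practice, as the article goes on to show, is that the inputs (expected loss, mitigation effect) are precisely what companies cannot observe.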
The security landscape in Europe is shaped by regulations, laws and standards that are currently being rolled out, while the actual motivation behind information security is economic. In their writings, Ross Anderson, Tyler Moore and other co-authors illustrate how information security is driven by economics, which problems result from this, and which recommendations the experts offer – an insight into the driving forces behind information security.
To understand whether it makes sense for companies to implement information security, we first have to understand why security is needed and who is responsible for it. Far-reaching economic challenges still need to be overcome before security can become a central property of services.
So far, according to Anderson and Moore, companies have lacked the incentive to develop and offer their services securely, which stems from a lack of insight into the advantages and disadvantages of those services. This becomes clearer when we look at the perspectives of those involved: service providers and buyers have different information about the security of the product on offer. This information asymmetry is one of the major challenges in securing services and systems. Since security is hardly visible from the outside and secure services barely differ in appearance from insecure ones, there is little demand for cost-intensive secure systems as long as cheaper alternatives are available¹. From the buyer’s perspective, there is no incentive to purchase a secure service in the first place; as a result, the seller’s incentive to invest in secure services diminishes as well. In the USA, this problem of the invisibility of security deficiencies and the incidents they cause is addressed by law: a company that has suffered a security incident must report it to a competent authority². This makes the disadvantages of insecure services visible, at least to some extent. Similar approaches have since been adopted in Europe. The reporting obligation turns a mandatory measure into an incentive: companies adopt security measures if they want to avoid having their failings made public³.
However, it remains unclear whether, and to what extent, information asymmetry and the lack of incentives can actually be counteracted. External regulation has only a limited influence on how secure services are presented to the market, and as long as security is not remunerated in line with the increased cost of development and implementation, it remains the provider’s business decision whether security ultimately pays off.
The consequences of security incidents are often not felt directly by the person responsible for them. For example, someone who connects an insecure device to their company network does not necessarily suffer the consequences of an attack that they themselves made possible. This raises the question of who should be responsible for preventing such incidents. It makes sense to distribute the responsibility for avoiding risks, such as the use of insecure devices, so that it lies not only with the person who suffers the consequences, but also with the person best placed to manage or contain the risk⁴. Ideally, these would be one and the same person. Unfortunately, this favorable distribution of responsibilities is not always given: the incentive to reduce a risk often does not lie with the person who bears the consequences of an incident, but in the worst case with someone who has no insight into how to handle the risk at all. A classic example is liability insurance, where the insured person acts recklessly and thereby increases the probability of an insured event⁵.
Despite all efforts, a laissez-faire principle of delegating responsibility and creating incentives may not be sufficient to enforce an appropriate level of security, let alone the extensive implementation of effective security measures. For companies, the losses caused by incidents are still too low compared with the cost of implementing adequate security. One means of enforcement would be fixed penalties⁶. The EU is taking this approach, at least in the area of data protection, with the General Data Protection Regulation, which came into force in 2018. An EU-wide law protecting personal data may benefit individuals, but it is still a long way from a holistic solution to the security problem – and it says nothing about how implementation can be made efficient.
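The deterrent the GDPR relies on can at least be quantified: for the most serious infringements, Art. 83(5) GDPR sets the upper fine bracket at the higher of EUR 20 million and 4% of worldwide annual turnover. A minimal sketch (the turnover figures are invented):

```python
# Upper fine bracket for serious infringements under Art. 83(5) GDPR:
# the higher of EUR 20 million and 4% of total worldwide annual turnover.

def gdpr_max_fine(annual_turnover_eur):
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A small company (100M EUR turnover) is capped by the flat 20M amount;
# a large one (2B EUR turnover) by the 4% rule:
print(gdpr_max_fine(100_000_000))    # 20000000
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

Whether such a penalty actually changes behavior depends, in the authors' logic, on whether the expected fine exceeds the cost of implementing adequate security.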
This phenomenon goes even further, as the incentives of individuals may diverge. The more similar the members of a group are – and I assume this principle also applies to technical infrastructures – the more vulnerable the group is to attack⁷. This has further political consequences⁸: here the authors refer to welfare communities that aim to create more social security through solidarity. Applied to the field of security, it remains to be weighed whether dividing an organization into security zones with different characteristics entails a certain additional effort on the one hand, but makes attacks more difficult on the other, precisely because of that diversity.
The lack of incentive to implement security measures poses a number of hurdles, but it is not the only obstacle the authors identify on the way to more secure systems and services. Alongside the question of responsibility for risk prevention, network externalities come into play: in many cases, a certain number of systems or endpoints must implement a measure before any added value is created⁹. This often prevents the adoption of measures that would have to be implemented by all parties at the same time. I would like to point out that this problem has since been addressed and solutions exist. One example is the NIST SP 800 series of standards, specifically the NIST Risk Management Framework, which tackles this issue by incorporating security into the development cycle of systems. Not only does this improve secure development; the measures to be implemented are also continuously adapted to the individual system in question. This series of standards has found its way into US legislation through FISMA and appears to be moving in the direction of implementing measures on a system-specific rather than a system-wide basis. There are also other approaches in Europe that are still under development and are based on internationally established standards.
To address this challenge, at least for providers of applications, a law could be introduced stipulating that only secure goods may be offered. Manufacturers, and ultimately developers, would be held liable if insecure applications are purchased and used¹⁰. The authors base this proposal on the fact that a product’s security is not apparent to the end user¹¹ – and even if it were clearly visible, the buyer would still weigh the security gain against the product’s extra cost. What features a “secure product” must fulfill, and to what extent patching falls into this category, would still need to be clarified.
Overall, there seems to be hardly any efficient way of exerting external pressure on companies to implement security more comprehensively. Edelman has shown that even certificates meant to attest to security cannot be taken as a sign of quality¹². The reason: it was precisely the companies operating insecure websites that sought such certificates in order to distract from their insecure state. Anderson and Moore show that, regardless of the state of the art, a number of factors make information security an unattractive topic for many companies, one that regulation can enforce only vaguely. It remains to be seen whether economic conditions will continue to hamper security requirements as long as risks are not made visible and responsibilities are not clearly assigned. In many industries today, security is implemented at a minimum level and primarily for the sake of legal compliance. The exciting question is whether, and for whom, a cultural change will occur that treats security as a good worth a certain premium.
A contribution by Alexander Kühl.
[1] EIS, p. 5; from “The Economics of Information Security”, Ross Anderson and Tyler Moore, University of Cambridge, UK, 2006, published in Science; cited here as EIS
[2] SEEP, p. 5; from “Security Economics and European Policy”, Ross Anderson, Rainer Böhme, Richard Clayton and Tyler Moore, Computer Laboratory, University of Cambridge, UK / Technische Universität Dresden, DE, 2008, published in ISSE 2008 Securing Electronic Business Processes; cited here as SEEP
[3] SEEP, p. 5
[4] EIS, p. 2
[5] EIS, p. 2
[6] SEEP, p. 10
[7] SEEP, p. 3
[8] SEEP, p. 3
[9] EIS, p. 4
[10] SEEP, p. 13
[11] EIS, p. 5
[12] B. Edelman, Proceedings of the Fifth Workshop on the Economics of Information Security (2006)