Does information security still make sense at all, or is it enough to comply with legal requirements? What is behind the concept of an ISMS?
We look behind the façades at the reasons why companies are so reluctant to approach security concepts pragmatically, and why the ROSI (Return on Security Investment) is still demanded.
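For readers unfamiliar with the term: ROSI is commonly computed by comparing the risk reduction a measure achieves with the cost of that measure. A minimal sketch, assuming the usual definition via annual loss expectancy (ALE); all figures are invented for illustration:

```python
def rosi(ale_before: float, ale_after: float, cost: float) -> float:
    """Return on Security Investment.

    ale_before / ale_after: annual loss expectancy without / with the measure.
    cost: annual cost of the security measure.
    A positive result means the measure saves more than it costs.
    """
    return (ale_before - ale_after - cost) / cost

# Invented example: a measure costing 20,000 that cuts the expected
# annual loss from 100,000 to 30,000.
print(rosi(100_000, 30_000, 20_000))  # 2.5, i.e. a 250% return
```

The difficulty in practice lies not in the arithmetic but in estimating the loss expectancies, which is exactly the visibility problem discussed below.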
The security landscape in Europe is characterized by regulations, laws and standards that are currently being rolled out, while the actual motivation behind information security is economic. In their writings, Ross Anderson and Tyler Moore, together with other co-authors, illustrate how information security is driven by economics, what problems result, and what recommendations the experts arrive at: an insight into the driving forces of information security.
To understand whether implementing information security makes sense for organizations, we first need to understand what security is needed for, and against whom. In this regard, businesses still face far-reaching challenges if security is to become a focus of their services.
So far, according to Anderson and Moore, companies lack the incentive to develop and offer their services securely, because the benefits and drawbacks of security remain hard to see. This becomes clearer when we look at the stakeholders’ perspectives: service providers and buyers have different information about the security of the offered product. This information asymmetry is one of the major challenges in securing services and systems. Since security can hardly be assessed in advance, and secure services outwardly differ little from insecure ones, there is little demand for costly secure systems as long as cheaper alternatives are on offer¹. From the buyer’s perspective, there is no incentive to purchase a secure service in the first place; as a result, the seller’s incentive to invest in secure services diminishes as well. In the U.S., this lack of visibility of security problems and the resulting incidents is addressed by law: a company experiencing a security incident must report it to a competent authority². At least to some extent, this makes the advantages and disadvantages of insecure services visible, and similar approaches have since emerged in Europe. Rather than mandating specific measures, the reporting requirement creates an incentive for companies to invest in security if they wish to avoid public disclosure of their failings³.
However, it remains unclear whether, and to what extent, information asymmetry and the lack of incentives to implement information security can be counteracted. External regulation has only limited influence on how secure services are presented, and as long as providers are not remunerated for the additional cost of developing and implementing security, it is left to them to decide whether security is ultimately profitable.
The consequences of security incidents are often not felt directly by the person responsible. Someone who connects an insecure device to the corporate network, for example, will not necessarily suffer the consequences of the attack they thereby enabled. This leaves open the question of who should be responsible for preventing such incidents. It makes sense to distribute the responsibility for avoiding risks, such as the use of insecure devices, so that it falls not only on the person who suffers the consequences, but also on the person best placed to handle or contain the risk in question⁴. Ideally, these would be one and the same person. Unfortunately, this favorable distribution of responsibilities is not always given. The incentive to reduce a risk often lies not with the person who bears the consequences of an incident but, at worst, with someone who has no insight into how to deal with the risk at all. A classic example is indemnity insurance in which the insured acts recklessly, thereby increasing the chance of an insured event⁵, a problem known as moral hazard.
A laissez-faire principle of delegating responsibility and creating incentives may not be enough to enforce an adequate level of security, let alone a broad implementation of effective security measures. For many companies, the losses caused by incidents are still too low compared with the cost of implementing comprehensive security. One means of enforcement would be fixed penalties⁶. This is the approach the EU is taking, at least for data protection, with the General Data Protection Regulation, which came into force in 2018. The case for an EU-wide law protecting personal data may resonate with individuals, but it is still very far from a holistic solution to the security problem, and it does not answer the question of how security can be implemented efficiently.
This phenomenon goes further, however, as the incentives of individuals may diverge. The more homogeneous groups are (and I assume this principle also applies to technical infrastructures), the more vulnerable they become to attack⁷. This would also have political consequences⁸. The authors refer here to welfare communities that, through solidarity, work to create greater social security. Applied to the field of security, it remains to be decided whether dividing an organization into security zones with different preferences causes a certain additional effort on the one hand, but on the other makes attacks more difficult because of the zones’ differing characteristics.
The lack of incentive to implement security measures poses hurdles, but it is not the only problem the authors address on the way to more secure systems and services. Beyond the question of who is responsible for implementing risk-avoidance measures, externalities remain in the foreground. Alongside the lack of incentive, these external effects make security measures harder to implement: in many cases, a measure only adds value once a certain number of systems or endpoints have implemented it⁹. This often prevents the adoption of measures that would have to be implemented by all parties at the same time. I would like to point out that this problem has since been addressed and solutions exist. One example is the NIST SP 800 series of standards, specifically the NIST Risk Management Framework, which takes this as its starting point and incorporates security into the development cycle of systems. Not only does this improve secure development; the measures to be implemented are also continuously adapted to the individual system in question. This series of standards has found its way into U.S. legislation through FISMA and appears to move in the same direction: implementing measures in a system-specific manner rather than uniformly across systems. Comparable approaches also exist in Europe; they continue to be developed and are aligned with internationally established standards.
To address the challenge, at least for providers of applications, legislation could require that only secure goods may be offered. Manufacturers, and ultimately developers, would be held responsible when insecure apps are purchased and used¹⁰. The authors base this on the fact that a product’s security is not apparent to the end user¹¹, and even if it were clearly visible, the buyer would still weigh security against the product’s extra cost. What characteristics a “secure product” must fulfill, and to what extent patching falls under this obligation, would still have to be clarified.
Overall, there seems to be no efficient way to push companies from the outside into implementing security more comprehensively. Edelman showed that even certificates, which are supposed to attest to security, cannot be regarded as a sign of quality¹². The reason was that it was precisely the companies running insecure websites that sought such certificates, in order to divert attention from their insecurity. Anderson and Moore show that, regardless of the state of the art, several factors make information security an unattractive topic for many companies, one that can only be vaguely enforced through regulation. It remains to be seen whether economic conditions will continue to make security requirements hard to enforce as long as risks are not made visible and responsibilities are not clearly assigned. In many industries today we see security implemented only minimally, primarily to comply with the law. It remains an exciting question whether, and among whom, a cultural change will take place so that security comes to be viewed as a good that justifies a certain premium.
A contribution by Alexander Kühl.
¹ EIS, p. 5. “The Economics of Information Security,” Ross Anderson and Tyler Moore, University of Cambridge, UK, published 2006 in Science; cited here as EIS.
² SEEP, p. 5. “Security Economics and European Policy,” Ross Anderson, Rainer Böhme, Richard Clayton and Tyler Moore, Computer Laboratory, University of Cambridge, UK, and Technische Universität Dresden, DE, published 2008 in ISSE 2008 Securing Electronic Business Processes; cited here as SEEP.
³ SEEP, p. 5
⁴ EIS, p. 2
⁵ EIS, p. 2
⁶ SEEP, p. 10
⁷ SEEP, p. 3
⁸ SEEP, p. 3
⁹ EIS, p. 4
¹⁰ SEEP, p. 13
¹¹ EIS, p. 5
¹² B. Edelman, Proceedings of the Fifth Workshop on the Economics of Information Security (2006).