Reflections on Cybersecurity
Author: William A. Wulf, Anita K. Jones
Year: 2009
Source: Science 326:943-944
ISSN: 1095-9203, 0036-8075
DOI: 10.1126/science.1181643
Description:

> Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
>
> –Antoine de Saint-Exupéry, Wind, Sand and Stars

Cyberspace is less secure than it was 40 years ago. That is not to say that no progress has been made; cryptography, for example, is much better. But more vital information is accessible on networked computers, and the consequences of intrusion are therefore much higher. A fresh approach is needed if the situation is to improve materially.

The prevailing assumption continues to be that if systems were implemented correctly, the problem would be solved. Yet software engineers have tried to do that for 40 years and have failed. A 1993 report from the Naval Research Laboratory (1) points to a deeper problem. It analyzed some 50 security breaches and found that in 22 of those cases the code correctly implemented the specifications; it was the specifications that were wrong. They handled the usual cases just fine but did not appreciate that, under some circumstances, permitted actions or outcomes were in fact security breaches.

A natural tendency is to declare a crisis and convene task forces and an army of programmers to "fix" the security problem(s). But, as detailed in Fred Brooks' The Mythical Man-Month (2), trying to get more "man months per calendar month" can actually make the situation worse, not better. We conjecture that a similar phenomenon is occurring for cybersecurity. The security model has remained the same since the 1960s, and software engineers have added more and more patches and widgets to try to enforce that security model. The complex interaction of this additional code with the extant code just provides more opportunities for security failures. The cybersecurity community must therefore ask whether the problem has been formulated in the right way.
[Figure omitted. CREDIT: JOE SUTLIFF/WWW.CDAD.COM/JOE]

The current model for most cybersecurity is "perimeter defense": The "good stuff" is on the "inside," the attacker is on the "outside," and the job of the security system is to keep the attacker out. The perimeter defense model is built deeply into the very language used to discuss security: Hackers try to "break in," "firewalls" protect the system, "intrusion" must be detected, and so on.

But is perimeter defense the right underlying model? We do not think so, for several reasons. First, perimeter defense does not protect against the compromised insider. The Federal Bureau of Investigation (FBI) has reported that in one sample of financial-system intrusions, attacks by insiders were twice as likely as attacks from outsiders, and the cost of an intrusion by an insider was 30 times as great (3). Second, it is fragile; once the perimeter has been breached, the attacker has free access. Some will say that this is why "defense in depth" is needed, but if each layer is just another perimeter defense, all layers will have the same problems. Third, and most important, it has never worked. It did not work for ancient walled cities or for the French in World War II (at 20 to 25 km deep, the Maginot Line was the most formidable military defense ever built, yet France was overrun in 35 days). And it has not worked for cybersecurity: To our knowledge, no one has ever built a secure, nontrivial computer system based on this model.

So what might be an alternative approach? We think we should take our cue from the Internet. That is, there should not be just one model. Rather, there should be a minimal central mechanism that enables implementation of many security policies in application code: systems attuned to the needs of differing applications and organizations. It is worth noting that the Internet succeeded so well precisely because it does so little.
At its core (the TCP/IP protocols), all the Internet does is promise "best effort" message delivery. It does not promise that messages will arrive in the order in which they were sent, that they will ever arrive at all, or even that the same message will not arrive multiple times. All of the "smarts" of the net are at its periphery, embedded in "end-to-end" protocols (4) that are defined by applications.

Dave Parnas, one of the early software engineers, made a provocative and, we think, deeply important observation that helps to explain the success of the TCP/IP protocols. He pointed out that, when doing a design, the hardest decision to change is the one you make first, because all the subsequent ones depend on it to some extent (5). The decision for the TCP/IP protocols to do so little never had to be reconsidered, because it precluded so little.

Is there an analogy to the Internet message-delivery design for security? Is there some minimal mechanism that would allow the construction of arbitrary end-to-end security protocols and allow an arbitrary number of these security protocols to coexist simultaneously? Is there a mechanism so simple that, while adequate to support the construction of security policies, it does not preempt any decisions on the definition of security or how it is achieved? We think the answer is yes.

But why build multiple "end-to-end security protocols" rather than one really good one? We offer three reasons. First, different applications have different security needs: The requirements of law enforcement emphasize the integrity of the trail of evidence, the intelligence community is most concerned with disclosure of sources and methods, legitimate access to electronic medical records may change dramatically in emergencies, and so on. The point is that a desirable security policy is a natural extension of the application; there is no single security policy that serves all needs equally well.
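The division of labor described above, a network core that promises only best-effort delivery with reliability reconstructed at the endpoints, can be illustrated with a small simulation. The channel model and the retransmission scheme below are our own illustrative choices, not from the article:

```python
import random

class BestEffortChannel:
    """Best-effort delivery, like IP: messages may be dropped,
    duplicated, or reordered. The channel promises nothing more."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.in_flight = []

    def send(self, msg):
        r = self.rng.random()
        if r < 0.2:
            return                        # dropped silently
        self.in_flight.append(msg)
        if r < 0.4:
            self.in_flight.append(msg)    # duplicated
        self.rng.shuffle(self.in_flight)  # reordered

    def drain(self):
        msgs, self.in_flight = self.in_flight, []
        return msgs

def reliable_transfer(channel, payloads, max_rounds=100):
    """End-to-end reliability built entirely at the edges:
    sequence numbers on send, deduplication and reordering on
    receive, retransmission until every message is accounted for."""
    received = {}
    unacked = dict(enumerate(payloads))
    for _ in range(max_rounds):
        if not unacked:
            break
        for seq, data in unacked.items():
            channel.send((seq, data))     # retransmit anything outstanding
        for seq, data in channel.drain():
            received[seq] = data          # duplicates are idempotent
        for seq in list(unacked):
            if seq in received:           # treat receipt as an "ack"
                del unacked[seq]
    return [received[i] for i in sorted(received)]
```

Despite drops, duplication, and reordering in the channel, the endpoint protocol recovers the payloads in order; the "smarts" live entirely in `reliable_transfer`, not in the channel.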
Second, multiple security protocols ensure that if one is broken, the others are not, or at least not in the same way. The current Internet clients form a predominantly Wintel/Cisco monoculture, so a single flaw can make almost the entire net vulnerable to the same attack. Incorporating multiple security policies, and multiple implementations of the same policy, can dramatically reduce this monoculture-induced vulnerability.

Third, the requirements of future applications cannot be predicted. In the same way that user-defined, end-to-end communications protocols allowed new applications that were not anticipated (such as the Web, search engines, and e-commerce), application-defined security protocols could accommodate unanticipated security requirements.

The lack of cybersecurity has been a consistent concern for 40 years. From time to time that concern flares up, and society resolves to "try harder," but the number of intrusions and their cost have only increased exponentially. It is time to re-examine the basic assumptions, like perimeter defense. Systems based on those assumptions have consistently failed. At least one alternative is an Internet-like minimal mechanism that enables application-defined security definitions.

Is such a minimal mechanism feasible? We think so. In particular, at the network level, an application can use any computable function to decide whether or not to provide its service to a client, provided it can be absolutely certain who is requesting it. There is a class of algorithms known as "cryptographic protocols" for establishing this, which require knowing the public key of an object; so we conjecture that by providing just a way of accessing the public key of an object, one could build an arbitrary end-to-end security policy.

References:
1. "A Taxonomy of Computer Program Security Flaws, with Examples," Naval Research Laboratory Report NRL/FR/5542–93/9591, November 1993.
2. F. Brooks, The Mythical Man-Month (Addison-Wesley, Reading, MA, 1975).
3.
Testimony of Keith Lourdeau, Deputy Assistant Director, Cyber Division, FBI, before the Senate Judiciary Subcommittee on Terrorism, Technology, and Homeland Security, February 2004.
4. J. H. Saltzer et al., ACM Trans. Comput. Syst. 2, 277 (1984). DOI: 10.1145/357401.357402
5. D. L. Parnas, Commun. ACM 15, 1053 (1972). DOI: 10.1145/361598.361623
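The closing conjecture, a single minimal mechanism (establishing who is asking) on top of which arbitrary application-defined policies can be layered, can be sketched as follows. This is our illustrative reading, not the authors' design: a keyed HMAC stands in for a real public-key signature scheme (such as Ed25519) so that the sketch runs with the Python standard library alone, and the principals and policies are hypothetical:

```python
import hashlib
import hmac

# Toy identity layer. In a real system, verification would use the
# requester's public key; here a per-principal shared secret stands in.
KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}

def sign(principal, request):
    """Client side: tag a request so the server can tell who sent it."""
    return hmac.new(KEYS[principal], request, hashlib.sha256).hexdigest()

def verified_principal(claimed, request, tag):
    """The minimal central mechanism: establish WHO is asking.
    Returns the principal name if the tag checks out, else None."""
    expected = hmac.new(KEYS[claimed], request, hashlib.sha256).hexdigest()
    return claimed if hmac.compare_digest(expected, tag) else None

def serve(policy, claimed, request, tag):
    """Any computable, application-defined policy layered on top of
    the one shared mechanism. Different applications plug in
    different policies without changing the mechanism itself."""
    who = verified_principal(claimed, request, tag)
    if who is None:
        return "denied: bad signature"
    return "ok" if policy(who, request) else "denied: policy"

# Two coexisting application-defined policies over the same mechanism
# (hypothetical examples echoing the law-enforcement and medical cases):
evidence_policy = lambda who, req: who == "alice"
emergency_policy = lambda who, req: req.startswith(b"EMERGENCY") or who == "bob"
```

The point of the sketch is the separation: `verified_principal` is the only shared, minimal mechanism, while every decision about what security *means* lives in the interchangeable `policy` functions.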
Database: OpenAIRE