@@ -39,23 +39,24 @@ An equally important requirement in many cases is
3939was sent by the entity that claimed to have sent it. In the example of
4040e-commerce, this is what allows us to know we are connected to, say,
4141the website of the vendor we wish to patronize and not handing over
42- our credit card to some imposter .
42+ our credit card to some impostor.
4343
44- Closely related to authentication is *integrity *. It is not only important that
44+ Closely related to authentication is *integrity*. It is important not only that
4545we know who we are talking to, but that we can verify
4646that the data sent across our connection has not been modified by some
4747adversary in transit.
4848
49- The preceding requirements also suggest a fourth: *identity *. That is,
50- we need a system by which the entities involved in communication,
51- often called *principals *, can be securely identified. As we will
52- discuss later,
53- this problem is harder to solve than it might first appear. How can we
54- know that a website we are communicating with actually represents the
55- business with whom we wish to communicate?
49+ The preceding requirements also suggest that we must have a concept of
50+ *identity*. That is, we need a system by which the entities involved
51+ in communication, often called *principals*, can be securely
52+ identified. As we will discuss later, this problem is harder to solve
53+ than it might first appear. How can we know that a website we are
54+ communicating with actually represents the business with whom we wish
55+ to communicate? Or how does a banking system know that the person
56+ behind a particular request is actually the account holder?
5657
5758Just as we are concerned that an adversary might access our data in
58- transit to eavesdrop on or modifiy it, we also need to be concened
59+ transit to eavesdrop on or modify it, we also need to be concerned
5960about *replay attacks* in which data is captured and then
6061retransmitted at some later time. For example, we would want to
6162protect against an attack in which an item added to a shopping cart
@@ -73,7 +74,7 @@ them can be protected against *denial-of-service* (DoS) attacks. The
7374Morris Worm was an early example of an unintentional DoS attack: as
7475the worm spread to more and more computers, and reinfected computers
7576on which it was already present, the resources consumed by the worm
76- rendered those computers unable to function. Networks provide a meands
77+ rendered those computers unable to function. Networks provide a means
7778by which data can be amplified by replication, allowing large volumes
7879of traffic to be sent to the target of a DoS attack; thus it has
7980become necessary to develop means to mitigate such attacks.
@@ -88,25 +89,155 @@ system remains secure. We will outline some of the most well-known
8889principles here and the following chapters contain examples of how
8990those principles have been applied in practice.
9091
92+ 2.2.1 Defense in Depth
93+ ~~~~~~~~~~~~~~~~~~~~~~
94+ As we have noted, one of the central challenges in security is that we
95+ never know if we have done enough. Much as we try to defend against
96+ all possible attacks, there is no way to be sure that we've thought of
97+ everything. This is what we mean by saying that security is a negative
98+ goal: we aim to be sure that a set of things cannot happen, but we can
99+ never quite be sure that all vulnerabilities have been found and
100+ mitigated. This leads to the idea of *defense in depth*: layer upon
101+ layer of defense, so that even if one layer is penetrated, the next
102+ layer is unlikely to be. Only by getting through all the layers of
103+ defense will an attacker be able to achieve their goal (of stealing
104+ our data, for example). The hope is that with enough layers of
105+ defense, the odds of an attacker penetrating all of them become
106+ vanishingly small.
107+
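The intuition that layered defenses multiply an attacker's difficulty can be made concrete with a back-of-envelope calculation. The 90% figure and the assumption that layers fail independently are illustrative only, not claims from any measurement:

```python
import math

# Toy model of defense in depth, assuming (optimistically) that the
# layers fail independently: if each layer on its own stops 90% of
# attacks, four stacked layers let through only 1 in 10,000.
p_bypass_one_layer = 0.1   # chance an attacker gets past a single layer
layers = 4
p_bypass_all = p_bypass_one_layer ** layers

assert math.isclose(p_bypass_all, 1e-4)
```

In practice the layers are rarely independent (the same phished credential may defeat several of them at once), which is why this model is only a rough guide to why layering helps.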
108+ As a simple example, a corporation might make use of a VPN (virtual
109+ private network) to ensure that only authorized users can access
110+ corporate servers, and that when they do so over the Internet, their
111+ traffic is encrypted. However, this single layer of defense is prone
112+ to several forms of attack, such as the presence of malware on the
113+ remote user's computer, or compromise of the remote user's
114+ credentials. Thus, additional layers of defense are needed, such as
115+ internal firewalls between different corporate systems; tools to
116+ detect, remediate, and prevent malware on the remote users' systems;
117+ and multi-factor authentication to protect against compromised user
118+ credentials. This is just a short list of defensive measures that are
119+ commonly used, and would not on their own be considered
120+ sufficient. But the point to note here is the use of many overlapping
121+ layers of defense to raise the bar high enough to thwart the majority
122+ of attacks.
123+
124+ The fact that we read about breaches in which attackers succeed in
125+ gaining access to corporate systems and data on a regular basis might
126+ suggest that the battle is being lost. Certainly the challenges in
127+ defeating determined attackers are substantial. However, it is surprising how
128+ frequently it turns out that a well-publicized attack has succeeded
129+ because some relatively common defensive measure, such as multi-factor
130+ authentication, was not put in place correctly.
131+
132+ 2.2.2 Principle of Least Privilege
133+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
134+ The principle of least privilege has a long history in computer
135+ science, having been proposed by Saltzer and Schroeder in 1975. The
136+ principle states:
137+
138+ "Every program and every user of the system should operate using the
139+ least set of privileges necessary to complete the job."
140+
141+ A common example of this principle in practice is to avoid running
142+ anything as root on Unix-like systems unless absolutely necessary.
143+
144+ In the context of networking, this principle implies that applications
145+ that access the network should have access only to the set of
146+ resources needed to do their jobs.
147+
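To make the Unix example concrete, here is a minimal sketch of how a network daemon might shed root privileges once they are no longer needed. It assumes a Unix-like system with an unprivileged account named ``nobody``; the function name is ours, not a standard API:

```python
import os
import pwd

def drop_privileges(username="nobody"):
    """Irreversibly give up root, keeping only an unprivileged identity.

    Returns True if privileges were dropped, or False if the process
    was not running as root in the first place (nothing to drop).
    """
    if os.geteuid() != 0:
        return False
    entry = pwd.getpwnam(username)
    os.setgroups([])           # shed supplementary groups first
    os.setgid(entry.pw_gid)    # then the group ...
    os.setuid(entry.pw_uid)    # ... and the user last: after setuid()
                               # we no longer have permission to call
                               # the other two
    return True
```

A daemon following this pattern would bind its privileged port (say, port 80) while still root, call ``drop_privileges()``, and only then begin parsing untrusted input from the network.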
148+ .. feel like there is more detail to provide here.
149+
150+ Interestingly, Saltzer and Schroeder explicitly mention "firewalls" in
151+ the section of their paper on least privilege, using the analogy from
152+ the physical world (a wall to prevent the spread of fire) before the
153+ concept of network firewalls had been invented. As we will discuss
154+ later, it turns out that the widespread use of network firewalls for
155+ most of their history *failed* to follow the principle of least
156+ privilege, in that it is common to find large "zones" of a network
157+ where all machines have access to each other, even though this access
158+ is not actually required for the machines to do their jobs. Addressing
159+ this shortcoming required some innovations in the design of firewalls
160+ that arrived only in the last decade or so.
161+
162+ 2.2.3 Open Design
163+ ~~~~~~~~~~~~~~~~~
164+
165+ Another principle codified by Saltzer and Schroeder is that of open
166+ design. It states that the mechanisms and
167+ algorithms that are used to implement security should be open, not
168+ secret. The idea is that rather than trying to keep something as large
169+ and complex as an encryption algorithm secret, it is better for that
170+ algorithm to be published and only the key(s) be secret. There are two
171+ reasons for this principle:
+
172+ * It is hard to keep an algorithm secret, especially if it is in
173+   widespread use, as is the case with encryption on the Internet;
174+ * Making security mechanisms robust against all forms of attack is, as
175+ we have discussed, difficult. Thus it is better to have wide
176+ scrutiny of these mechanisms to expose weaknesses that may then be
177+ rectified.
178+
179+ The history of computer security is filled with cautionary tales
180+ related to this principle. In the cases where the principle is
181+ followed, subtle bugs in protocol design or implementation have been
182+ exposed and patches rolled out to mitigate them. Heartbleed, a bug in
183+ OpenSSL, the widely used open source implementation of SSL/TLS, is a famous
184+ example. The consequences of the bug were serious, with as many as
185+ half a million Web servers being impacted, but it was a positive thing
186+ that the bug was found, reported, and remediated quickly.
187+
188+ If this principle is not followed, a design that is believed to be
189+ secret may in fact have been compromised (e.g. by reverse
190+ engineering), or may have flaws that have gone unreported but are
191+ nevertheless being exploited.
192+
193+ Another way to state this principle is "minimize secrets". For
194+ example, rather than trying to keep an entire algorithm secret, only
195+ keep secret the key that the algorithm uses for decryption. It is
196+ much easier to replace a key that has been compromised than to replace
197+ an entire algorithm.
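The "minimize secrets" idea is easy to see with a message authentication code from the Python standard library. The algorithm (HMAC with SHA-256) is public and widely scrutinized; the only secret is the key, and replacing a compromised key is a one-line change:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)   # the only secret in the system

def tag(message):
    # HMAC-SHA256: an open, published algorithm; security rests
    # entirely on the secrecy of the key.
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"example message"
t = tag(msg)

# Verification recomputes the tag; compare_digest avoids leaking
# information through timing differences.
assert hmac.compare_digest(t, tag(msg))

# If the key is compromised, rotate it: the algorithm is untouched,
# and tags made under the old key simply stop verifying.
key = secrets.token_bytes(32)
assert not hmac.compare_digest(t, tag(msg))
```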
198+
199+ 2.2.4 Fail-Safe Defaults
200+ ~~~~~~~~~~~~~~~~~~~~~~~~
201+
202+ The idea behind this principle is that the default settings of a system are
203+ the ones most likely to be used, so by default, undesired access
204+ should be disabled. It then takes an explicit action to enable
205+ access. This is a principle that dates back at least to 1965 according to
206+ the Saltzer and Schroeder paper.
207+
208+ It turns out that the design of the Internet really doesn't follow
209+ this approach. The datagram delivery model of the Internet, by
210+ default, allows packets from anywhere to be sent anywhere. So to the
211+ extent that sending a packet to a system can be defined as accessing
212+ the system, the Internet's default behavior does not provide fail-safe
213+ defaults. Efforts to establish a more secure default behavior include
214+ such old ideas as network firewalls and virtual private networks,
215+ along with more modern approaches such as microsegmentation and
216+ zero-trust architectures. We will discuss these developments in a later chapter.
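The principle is captured by the default-deny pattern that firewalls and zero-trust systems apply. The sketch below is a toy illustration of that pattern; the class and method names are ours, not any real firewall's API:

```python
class DefaultDenyFilter:
    """Toy packet filter with fail-safe defaults: with no rules
    installed, everything is denied; access must be explicitly enabled."""

    def __init__(self):
        self.allowed = set()   # (source, destination, port) tuples

    def allow(self, src, dst, port):
        self.allowed.add((src, dst, port))

    def permits(self, src, dst, port):
        # Fail-safe: anything not explicitly allowed is denied.
        return (src, dst, port) in self.allowed

fw = DefaultDenyFilter()
assert not fw.permits("10.0.0.5", "10.0.1.9", 443)   # denied by default
fw.allow("10.0.0.5", "10.0.1.9", 443)                # explicit enable
assert fw.permits("10.0.0.5", "10.0.1.9", 443)
assert not fw.permits("10.0.0.5", "10.0.1.9", 22)    # still denied
```

Note that the explicit `allow` action, not the absence of a `deny`, is what grants access; forgetting a rule fails closed rather than open.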
217+
218+
219+
220+ .. admonition:: Further Reading
221+
222+ Jerome Saltzer and Michael Schroeder. `The Protection of Information
223+ in Computer Systems
224+ <http://web.mit.edu/Saltzer/www/publications/protection/index.html>`__. In
225+ Proceedings of the IEEE, 1975.
226+
91227
92- .. working list of ideas
93-
94- Least Privilege
95228
96- Defense in Depth/safety net
229+ .. working list of ideas
230+ safety net
97231
98232Be Explicit
99233
100234Design for Iteration
101235
102236Audit
103237
104- Open Design
105238
106- Minimize secrets
107239
108240economy of mechanism
109241
110- fail-safe defaults
111242
112243