@@ -16,8 +16,8 @@ description: >-

<!-- overview -->

- If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you
- might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
+ If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
+ then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.

NetworkPolicies are an application-centric construct which allow you to specify how a {{<
glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
@@ -257,21 +257,23 @@ creating the following NetworkPolicy in that namespace.

This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
ingress or egress traffic.

- ## SCTP support
+ ## Network traffic filtering

- {{< feature-state for_k8s_version="v1.20" state="stable" >}}
-
- As a stable feature, this is enabled by default. To disable SCTP at a cluster level, you (or your
- cluster administrator) will need to disable the `SCTPSupport`
- [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
- for the API server with `--feature-gates=SCTPSupport=false,…`.
- When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
+ NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
+ connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
+ across network plugins.

{{< note >}}
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP
protocol NetworkPolicies.
{{< /note >}}

+ When a `deny all` network policy is defined, it is only guaranteed to deny TCP, UDP, and SCTP
+ connections. For other protocols, such as ARP or ICMP, the behaviour is undefined.
+ The same applies to allow rules: when a specific pod is allowed as an ingress source or egress
+ destination, it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP
+ may be allowed by some network plugins and denied by others.
+
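+ For illustration, a minimal sketch of such a `deny all` policy (the namespace name `default` is
+ only an example): even with this in place, only TCP, UDP, and SCTP traffic is guaranteed to be
+ denied.
+
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: NetworkPolicy
+ metadata:
+   name: default-deny-all
+   namespace: default
+ spec:
+   # An empty podSelector selects every pod in the namespace.
+   podSelector: {}
+   # Listing both policy types with no rules isolates the pods for
+   # both ingress and egress.
+   policyTypes:
+     - Ingress
+     - Egress
+ ```
+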
## Targeting a range of ports

{{< feature-state for_k8s_version="v1.25" state="stable" >}}
@@ -346,6 +348,88 @@ namespaces, the value of the label is the namespace name.

While NetworkPolicy cannot target a namespace by its name with some object field, you can use the
standardized label to target a specific namespace.
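+ For example, a sketch of an ingress rule that admits traffic only from a namespace named
+ `my-namespace` (the namespace name is an assumption), using the standardized label:
+
+ ```yaml
+ ingress:
+   - from:
+       - namespaceSelector:
+           matchLabels:
+             kubernetes.io/metadata.name: my-namespace
+ ```
+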
+ ## Pod lifecycle
+
+ {{< note >}}
+ The following applies to clusters with a conformant networking plugin and a conformant
+ implementation of NetworkPolicy.
+ {{< /note >}}
+
+ When a new NetworkPolicy object is created, it may take some time for a network plugin
+ to handle the new object. If a pod that is affected by a NetworkPolicy
+ is created before the network plugin has completed NetworkPolicy handling,
+ that pod may be started unprotected, and isolation rules will be applied when
+ the NetworkPolicy handling is completed.
+
+ Once the NetworkPolicy is handled by a network plugin,
+
+ 1. All newly created pods affected by a given NetworkPolicy will be isolated before
+    they are started.
+    Implementations of NetworkPolicy must ensure that filtering is effective throughout
+    the Pod lifecycle, even from the very first instant that any container in that Pod is started.
+    Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
+    sidecar containers, and regular containers.
+
+ 2. Allow rules will be applied eventually after the isolation rules (or may be applied at the
+    same time). In the worst case, a newly created pod may have no network connectivity at all
+    when it is first started, if isolation rules were already applied, but no allow rules were
+    applied yet.
+
+ Every created NetworkPolicy will be handled by a network plugin eventually, but there is no
+ way to tell from the Kubernetes API when exactly that happens.
+
+ Therefore, pods must be resilient against being started up with different network
+ connectivity than expected. If you need to make sure the pod can reach certain destinations
+ before being started, you can use an [init container](/docs/concepts/workloads/pods/init-containers/)
+ to wait for those destinations to be reachable before kubelet starts the app containers.
+
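+ A sketch of that pattern (the image, destination host, and port are assumptions):
+
+ ```yaml
+ initContainers:
+   - name: wait-for-service
+     image: busybox:1.36
+     # Retry until the destination accepts a TCP connection, then exit
+     # so that kubelet can start the app containers.
+     command:
+       - sh
+       - -c
+       - until nc -z my-service.my-namespace.svc 80; do sleep 1; done
+ ```
+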
+ Every NetworkPolicy will be applied to all selected pods eventually.
+ Because the network plugin may implement NetworkPolicy in a distributed manner,
+ it is possible that pods may see a slightly inconsistent view of network policies
+ when the pod is first created, or when pods or policies change.
+ For example, a newly-created pod that is supposed to be able to reach both Pod A
+ on Node 1 and Pod B on Node 2 may find that it can reach Pod A immediately,
+ but cannot reach Pod B until a few seconds later.
+
+ ## NetworkPolicy and `hostNetwork` pods
+
+ NetworkPolicy behaviour for `hostNetwork` pods is undefined, but it should be limited to two
+ possibilities:
+ - The network plugin can distinguish `hostNetwork` pod traffic from all other traffic
+   (including being able to distinguish traffic from different `hostNetwork` pods on
+   the same node), and will apply NetworkPolicy to `hostNetwork` pods just like it does
+   to pod-network pods.
+ - The network plugin cannot properly distinguish `hostNetwork` pod traffic,
+   and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
+   Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
+   (This is the most common implementation.)
+
+ This applies when
+ 1. a `hostNetwork` pod is selected by `spec.podSelector`.
+
+    ```yaml
+    ...
+    spec:
+      podSelector:
+        matchLabels:
+          role: client
+    ...
+    ```
+
+ 2. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress`
+    or `egress` rule.
+
+    ```yaml
+    ...
+    ingress:
+      - from:
+        - podSelector:
+            matchLabels:
+              role: client
+    ...
+    ```
+
+ At the same time, since `hostNetwork` pods have the same IP addresses as the nodes they reside on,
+ their connections will be treated as node connections. For example, you can allow traffic
+ from a `hostNetwork` Pod using an `ipBlock` rule.
+
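+ A sketch of such an `ipBlock` rule (the node CIDR is an assumption):
+
+ ```yaml
+ ingress:
+   - from:
+       # Matches the node IPs, and therefore connections from hostNetwork
+       # pods running on those nodes.
+       - ipBlock:
+           cidr: 192.168.1.0/24
+ ```
+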
## What you can't do with network policies (at least, not yet)

As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the