@@ -145,7 +145,6 @@ spec:
targetPort: http-web-svc
```
-
This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
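To make the hunk above concrete, here is a minimal sketch of that mixture: two Pods expose the same named port on different container ports, and the Service targets the port by name. Pod names, images, and labels are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: proxy-v1
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc        # same name as below, different number
---
apiVersion: v1
kind: Pod
metadata:
  name: proxy-v2
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:mainline
    ports:
    - containerPort: 8080
      name: http-web-svc        # the name stays stable while the number evolves
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - port: 80
    protocol: TCP
    targetPort: http-web-svc    # resolved per Pod: 80 for proxy-v1, 8080 for proxy-v2
```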
@@ -353,7 +352,7 @@ thus is only available to use as-is.
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy
- effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
+ effectively deprecates the behavior for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
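As a rough sketch of where that configuration lives, assuming a kubeadm-provisioned cluster (where the ConfigMap is named `kube-proxy` in `kube-system` and embeds a KubeProxyConfiguration under the `config.conf` key):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # The mode kube-proxy starts in. Because the ConfigMap is not live-reloaded,
    # changing this only takes effect after the kube-proxy Pods restart.
    mode: "iptables"
```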
@@ -420,7 +419,7 @@ The IPVS proxy mode is based on netfilter hook function that is similar to
iptables mode, but uses a hash table as the underlying data structure and works
in the kernel space.
That means kube-proxy in IPVS mode redirects traffic with lower latency than
- kube-proxy in iptables mode, with much better performance when synchronising
+ kube-proxy in iptables mode, with much better performance when synchronizing
proxy rules. Compared to the other proxy modes, IPVS mode also supports a
higher throughput of network traffic.
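A minimal sketch of enabling this mode in the kube-proxy configuration; the sync periods are illustrative values, and the node needs the IPVS kernel modules available, otherwise kube-proxy falls back to iptables mode:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # how often proxy rules are re-synchronized with the kernel
  syncPeriod: 30s
  minSyncPeriod: 5s
```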
@@ -662,7 +661,8 @@ Kubernetes `ServiceTypes` allow you to specify what kind of Service you want.
* [`ExternalName`](#externalname): Maps the Service to the contents of the
`externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
with its value. No proxying of any kind is set up.
- {{< note >}}You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
+ {{< note >}}
+ You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
to use the `ExternalName` type.
{{< /note >}}
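For reference, a minimal ExternalName Service matching this description (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com   # lookups of my-service return a CNAME to this name
```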
@@ -740,11 +740,11 @@ kube-proxy only selects the loopback interface for NodePort Services.
The default for `--nodeport-addresses` is an empty list.
This means that kube-proxy should consider all available network interfaces for NodePort.
(That's also compatible with earlier Kubernetes releases.)
- Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
- and `.spec.clusterIP:spec.ports[*].port`.
+ {{< note >}}
+ This Service is visible as `<NodeIP>:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`.
If the `--nodeport-addresses` flag for kube-proxy or the equivalent field
in the kube-proxy configuration file is set, `<NodeIP>` would be a filtered node IP address (or possibly IP addresses).
-
+ {{< /note >}}
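A short sketch of a NodePort Service for reference; the `nodePort` value 30007 is arbitrary and can be omitted to let the control plane pick one from the configured range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - port: 80           # reachable in-cluster at <clusterIP>:80
    targetPort: 80
    nodePort: 30007    # reachable externally at <NodeIP>:30007
```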
### Type LoadBalancer {#loadbalancer}
@@ -793,7 +793,6 @@ _As an alpha feature_, you can configure a load balanced Service to
[omit](#load-balancer-nodeport-allocation) assigning a node port, provided that the
cloud provider implementation supports this.
-
{{< note >}}
On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need
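A hedged sketch of the node port opt-out mentioned above; depending on the Kubernetes version this field may sit behind a feature gate, and it only makes sense when the cloud provider's load balancer can route traffic without node ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # do not reserve a node port for this Service
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - port: 80
    targetPort: 9376
```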
@@ -1400,7 +1399,7 @@ fail with a message indicating an IP address could not be allocated.
In the control plane, a background controller is responsible for creating that
map (needed to support migrating from older versions of Kubernetes that used
in-memory locking). Kubernetes also uses controllers to check for invalid
- assignments (eg due to administrator intervention) and for cleaning up allocated
+ assignments (e.g. due to administrator intervention) and for cleaning up allocated
IP addresses that are no longer used by any Services.
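The assignments being tracked include addresses requested explicitly as well as auto-allocated ones; for example (the address is hypothetical and must be unused and inside the cluster's service CIDR):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.96.0.50   # must lie within the service-cluster-ip-range and not already be allocated
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - port: 80
    targetPort: 9376
```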
#### IP address ranges for `type: ClusterIP` Services {#service-ip-static-sub-range}
@@ -1476,7 +1475,7 @@ through a load-balancer, though in those cases the client IP does get altered.
#### IPVS
- iptables operations slow down dramatically in large scale cluster e.g 10,000 Services.
+ iptables operations slow down dramatically in large scale cluster e.g. 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables.
So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy.
Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms
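Those algorithms are selected through the kube-proxy configuration; a minimal sketch, where `lc` (least connection) is just one example value:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"   # other options include rr (round robin) and sh (source hashing)
```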