<!-- content/en/docs/concepts/architecture/control-plane-node-communication.md -->
---
reviewers:
- dchen1107
- liggitt
title: Communication between Nodes and the Control Plane
content_type: concept
weight: 20
aliases:
---

<!-- overview -->

This document catalogs the communication paths between the API server and the Kubernetes cluster.
The intent is to allow users to customize their installation to harden the network configuration
such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud
provider).

<!-- body -->
## Node to Control Plane

Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run)
terminates at the API server. None of the other control plane components are designed to expose
remote services. The API server is configured to listen for remote connections on a secure HTTPS
port (typically 443) with one or more forms of client
[authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be
enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests)
or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
are allowed.

Nodes should be provisioned with the public root certificate for the cluster such that they can
connect securely to the API server along with valid client credentials. A good approach is that
the client credentials provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
for automated provisioning of kubelet client certificates.
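
As a sketch of what such provisioning produces, a kubelet kubeconfig wires the cluster's root
certificate and the kubelet's client certificate together. The file paths, server address, and
cluster name below are hypothetical; real clusters typically generate this file via TLS
bootstrapping rather than by hand:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                 # hypothetical cluster name
  cluster:
    # Public root certificate of the cluster, used to verify the API server.
    certificate-authority: /var/lib/kubelet/pki/ca.crt
    server: https://api.example.internal:443   # assumed API server address
users:
- name: kubelet
  user:
    # Client certificate credentials presented by the kubelet.
    client-certificate: /var/lib/kubelet/pki/kubelet-client.crt
    client-key: /var/lib/kubelet/pki/kubelet-client.key
contexts:
- name: kubelet@my-cluster
  context:
    cluster: my-cluster
    user: kubelet
current-context: kubelet@my-cluster
```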

Pods that wish to connect to the API server can do so securely by leveraging a service account so
that Kubernetes will automatically inject the public root certificate and a valid bearer token
into the pod when it is instantiated.
The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is
redirected (via `kube-proxy`) to the HTTPS endpoint on the API server.
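
As an illustration (in practice, prefer an official client library), a pod can locate the API
server and its injected credentials from the standard environment variables and mount paths. The
variable names and file paths below are the standard ones; the fallback address is only an
assumption so the sketch runs outside a cluster:

```python
import os

# Sketch of what in-cluster clients do to find the API server.
# KUBERNETES_SERVICE_HOST/PORT are injected by Kubernetes into every pod;
# the fallback IP here is an assumption for running this outside a cluster.
host = os.environ.get("KUBERNETES_SERVICE_HOST", "10.96.0.1")
port = os.environ.get("KUBERNETES_SERVICE_PORT", "443")
api_server = f"https://{host}:{port}"

# Credentials automatically mounted for the pod's service account:
token_path = "/var/run/secrets/kubernetes.io/serviceaccount/token"  # bearer token
ca_path = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"    # cluster root cert

def auth_headers():
    """Read the mounted bearer token; only works when run inside a pod."""
    with open(token_path) as f:
        return {"Authorization": f"Bearer {f.read().strip()}"}
```

A client would then make HTTPS requests to `api_server`, verifying the server against `ca_path`
and authenticating with `auth_headers()`.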

The control plane components also communicate with the API server over the secure port.

As a result, the default operating mode for connections from the nodes and pods running on the
nodes to the control plane is secured by default and can run over untrusted and/or public
networks.

## Control plane to node

There are two primary communication paths from the control plane (the API server) to the nodes.
The first is from the API server to the kubelet process which runs on each node in the cluster.
The second is from the API server to any node, pod, or service through the API server's _proxy_
functionality.

### API server to kubelet

The connections from the API server to the kubelet are used for:

* Fetching logs for pods.
* Attaching (usually through `kubectl`) to running pods.
* Providing the kubelet's port-forwarding functionality.
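
Each of these paths is exercised by a familiar `kubectl` command (the pod and container names
below are hypothetical):

```shell
# Fetch logs for a pod (API server -> kubelet):
kubectl logs my-pod

# Attach to a running container in a pod:
kubectl attach my-pod -c my-container -i -t

# Forward a local port to a port on the pod via the kubelet:
kubectl port-forward my-pod 8080:80
```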

These connections terminate at the kubelet's HTTPS endpoint. By default, the API server does not
verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle
attacks and **unsafe** to run over untrusted and/or public networks.

To verify this connection, use the `--kubelet-certificate-authority` flag to provide the API
server with a root certificate bundle to use to verify the kubelet's serving certificate.
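
For example, the flag is passed on the `kube-apiserver` invocation together with the client
credentials the API server presents to the kubelet. The certificate paths here are assumptions
(on kubeadm clusters these flags typically live in the kube-apiserver static Pod manifest):

```shell
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```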

If that is not possible, use [SSH tunneling](#ssh-tunnels) between the API server and kubelet if
required to avoid connecting over an untrusted or public network.

Finally, [Kubelet authentication and/or authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
should be enabled to secure the kubelet API.

### API server to nodes, pods, and services

The connections from the API server to a node, pod, or service default to plain HTTP connections
and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS
connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will
not validate the certificate provided by the HTTPS endpoint nor provide client credentials. So
while the connection will be encrypted, it will not provide any guarantees of integrity. These
connections **are not currently safe** to run over untrusted or public networks.
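
A small sketch of how that `https:` prefix fits into the proxy URL for a pod (the namespace, pod
name, and port are hypothetical):

```python
# How the API server's proxy path to a pod is formed. Prefixing the pod name
# with "https:" makes the API server use HTTPS to the backend, but it still
# does not validate the backend's certificate or send client credentials.
def pod_proxy_path(namespace, pod, port, scheme=""):
    prefix = f"{scheme}:" if scheme else ""
    return f"/api/v1/namespaces/{namespace}/pods/{prefix}{pod}:{port}/proxy/"

plain = pod_proxy_path("default", "my-pod", 8443)            # plain HTTP to the pod
https = pod_proxy_path("default", "my-pod", 8443, "https")   # encrypted, but unverified
```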
### SSH tunnels

Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this
configuration, the API server initiates an SSH tunnel to each node in the cluster (connecting to
the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or
service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are
running.

{{< note >}}
SSH tunnels are currently deprecated, so you shouldn't opt to use them unless you know what you
are doing. The [Konnectivity service](#konnectivity-service) is a replacement for this
communication channel.
{{< /note >}}

### Konnectivity service

As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the
control plane to cluster communication. The Konnectivity service consists of two parts: the
Konnectivity server in the control plane network and the Konnectivity agents in the nodes network.
The Konnectivity agents initiate connections to the Konnectivity server and maintain the network
connections.

After enabling the Konnectivity service, all control plane to nodes traffic goes through these
connections.
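
Enabling the service involves pointing the API server at the Konnectivity server through an
egress selector configuration. The sketch below shows the general shape of such a file; the
socket path is an assumption and the exact API version may differ between releases, so treat the
linked task page as authoritative:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Route "cluster" egress (API server -> nodes/pods/services) through Konnectivity.
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        # Unix domain socket shared with the Konnectivity server (assumed path).
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```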

Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set
up the Konnectivity service in your cluster.