On the other hand, Kubernetes was not originally designed to be a multi-tenant system, meaning the basic usage pattern involves creating a separate Kubernetes cluster for every independent project and development team.

Virtual machines are the primary means of isolating tenants from each other in a cloud. In virtual machines, users are allowed to run any code with administrative rights, but this doesn't affect other tenants or the cloud system itself. In other words, virtual machines allow you to achieve [hard multi-tenancy isolation](/docs/concepts/security/multi-tenancy/#isolation) and to run in environments where tenants do not trust each other.
## Virtualization technologies in Kubernetes
Kata Containers and KubeVirt are the most popular ones. But you should know that they work differently.

**Kata Containers** implements the CRI (Container Runtime Interface) and provides an additional level of isolation for standard containers by running them in virtual machines. But they still run within the same single Kubernetes cluster.

{{< figure src="kata-containers.svg" caption="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" alt="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" >}}
**KubeVirt** allows you to run traditional virtual machines using the Kubernetes API. KubeVirt virtual machines run as regular Linux processes in containers. In other words, in KubeVirt a container is used simply as a sandbox for running virtual machine (QEMU) processes.
This can be clearly seen by looking at how live migration of virtual machines is implemented in KubeVirt. When migration is needed, the virtual machine is moved from one container to another.

{{< figure src="kubevirt-migration.svg" caption="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" alt="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" >}}
There is also an alternative project - [Virtink](https://github.com/smartxworks/virtink), which implements lightweight virtualization using [Cloud-Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) and is initially focused on running virtual Kubernetes clusters using the Cluster API.

The storage system can be external or internal (in the case of hyper-converged infrastructure). Using external storage in many cases makes the whole system more stable, as your data is stored separately from the compute nodes.

{{< figure src="storage-external.svg" caption="A diagram showing external data storage communication with the compute nodes" alt="A diagram showing external data storage communication with the compute nodes" >}}

External storage solutions are often popular in enterprise systems because such storage is frequently provided by an external vendor that takes care of its operations. The integration with Kubernetes involves only a small component installed in the cluster - the CSI driver. This driver is responsible for provisioning volumes in this storage and attaching them to pods run by Kubernetes. However, such storage solutions can also be implemented using purely open-source technologies. One of the popular solutions is [TrueNAS](https://www.truenas.com/) powered by the [democratic-csi](https://github.com/democratic-csi/democratic-csi) driver.

{{< figure src="storage-local.svg" caption="A diagram showing local data storage running on the compute nodes" alt="A diagram showing local data storage running on the compute nodes" >}}

On the other hand, hyper-converged systems are often implemented using local storage (when you do not need replication) and software-defined storage solutions, often installed directly in Kubernetes, such as [Rook/Ceph](https://rook.io/), [OpenEBS](https://openebs.io/), [Longhorn](https://longhorn.io/), [LINSTOR](https://linbit.com/linstor/), and others.

{{< figure src="storage-clustered.svg" caption="A diagram showing clustered data storage running on the compute nodes" alt="A diagram showing clustered data storage running on the compute nodes" >}}

A hyper-converged system has its advantages, e.g. data locality: when your data is stored locally, access to that data is faster. But there are also disadvantages, as such a system is usually more difficult to manage and maintain.
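
In either case, from the workload's point of view a volume is typically requested the same way: through a PersistentVolumeClaim bound to a StorageClass provided by the storage solution's CSI driver. A minimal sketch (the StorageClass name `replicated-block` is hypothetical and depends on your storage solution):

```yaml
# Request a raw block volume from a CSI-backed StorageClass.
# volumeMode: Block skips the filesystem layer, which is convenient when
# the volume is attached directly to a virtual machine as a disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-root-disk
spec:
  storageClassName: replicated-block  # hypothetical; provided by your CSI driver
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 20Gi
```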
### Node Network

The network through which the nodes are interconnected with each other. This network is usually not managed by Kubernetes, but it is important because without it, nothing would work. In practice, bare metal infrastructure usually has more than one such network, e.g. one for node-to-node communication, a second for storage replication, a third for external access, and so on.

{{< figure src="net-nodes.svg" caption="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" alt="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" >}}

Configuring the physical network interaction between nodes goes beyond the scope of this article, as in most situations, Kubernetes utilizes already existing network infrastructure.
### Pod Network
This is the network provided by your CNI plugin. The task of the CNI plugin is to ensure transparent connectivity between all containers and nodes in the cluster. Most CNI plugins implement a flat network from which separate blocks of IP addresses are allocated for use on each node.

{{< figure src="net-pods.svg" caption="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" >}}

In practice, your cluster can have several CNI plugins managed by [Multus](https://github.com/k8snetworkplumbingwg/multus-cni). This approach is often used in virtualization solutions based on KubeVirt, such as [Rancher](https://www.rancher.com/) and [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization). The primary CNI plugin is used for integration with Kubernetes services, while additional CNI plugins are used to implement private networks (VPC) and integration with the physical networks of your data center.
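
As a rough illustration of this approach (a sketch; the bridge name and the subnet are assumptions specific to your environment), an additional network is described by a NetworkAttachmentDefinition and then referenced from a pod, or from a KubeVirt virtual machine, via an annotation:

```yaml
# Additional network managed by Multus; the primary CNI plugin stays untouched.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br-vlan100",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24"
      }
    }
---
# A pod attached to both the primary pod network and the additional one
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: vlan100
spec:
  containers:
  - name: app
    image: nginx:1.25
```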
### Services Network

The services network provides a convenient abstraction (stable IP addresses and DNS names) that will always direct traffic to the correct pod.
The same approach is also commonly used with virtual machines in clouds, despite the fact that their IPs are usually static.

{{< figure src="net-services.svg" caption="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" >}}

The implementation of the services network in Kubernetes is handled by the services network plugin. The standard implementation is called **kube-proxy** and is used in most clusters.
But nowadays this functionality might be provided as part of the CNI plugin. The most advanced implementation is offered by the [Cilium](https://cilium.io/) project, which can be run in kube-proxy replacement mode.
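
Regardless of which component implements it, the abstraction itself is an ordinary Service object. A minimal sketch (the selector `app: demo` is illustrative and must match the labels of your pods, or of the pods wrapping your virtual machines):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo        # must match your workload's labels
  ports:
  - protocol: TCP
    port: 80         # stable port exposed on the Service's virtual IP
    targetPort: 8080 # port the application listens on inside the pod
```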
For bare metal Kubernetes clusters, there are several load balancers available.

The role of an external load balancer is to provide a stable address available externally and direct external traffic to the services network.
The services network plugin will direct it to your pods and virtual machines as usual.

{{< figure src="net-services.svg" caption="A diagram showing the role of the external load balancer on the Kubernetes network scheme" alt="The role of the external load balancer on the Kubernetes network scheme" >}}

In most cases, setting up a load balancer on bare metal is achieved by creating a floating IP address on the nodes within the cluster and announcing it externally using the ARP/NDP or BGP protocols.
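
As one possible sketch of this pattern, here is what an L2 (ARP/NDP) configuration could look like with MetalLB, used here purely as an example of such a load balancer; the address range is hypothetical and must come from your data center network:

```yaml
# A pool of floating IPs that can be assigned to Services of type LoadBalancer
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
# Announce those addresses to the local network using ARP/NDP (L2 mode)
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - external-pool
```

A Service of type `LoadBalancer` then automatically receives an address from this pool and becomes reachable from outside the cluster.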