But first, a brief introduction to KubeVirt.
KubeVirt provides abstractions to Kubernetes users for the Linux Kernel-based Virtual Machine (KVM). KVM has been around for about two decades, with several successful commercial hypervisors built around the implementation, and is at this point considered mature.
KubeVirt itself does not have the kind of user interface most VM administrators are used to. Instead, the point of abstraction is standard Kubernetes tooling: users manipulate API resources of different Kinds provided by `CustomResourceDefinitions` (CRDs).
These CRDs allow users to manage VM resources through a set of KubeVirt’s controllers.
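As an illustrative sketch (the name, image, and memory size below are placeholders, not taken from the original post), a minimal `VirtualMachine` resource looks along these lines:

```yaml
# A minimal KubeVirt VirtualMachine; name, image, and sizing are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always          # start the VM and keep it running
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

Applying this with `kubectl apply -f` hands the resource to KubeVirt’s controllers, which create a `VirtualMachineInstance` and a virt-launcher pod to run it.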
Deploying KubeVirt on upstream Kubernetes and other distributions is straightforward. The [official documentation](https://kubevirt.io/user-guide/) walks through the different distributions and the platform-specific quirks that need to be considered.
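On a vanilla cluster, that typically boils down to applying two manifests from the KubeVirt releases page. As a sketch (the version pinned below is only an example; check the releases page for the current one):

```
# Deploy the KubeVirt operator, then the custom resource that triggers the install.
export VERSION=v1.3.0   # example version, not necessarily current
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

# Block until the operator reports the deployment as available.
kubectl -n kubevirt wait kv kubevirt --for=condition=Available --timeout=10m
```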
Most VM administrators connect VMs to existing networks that assign IP addresses.
As a prerequisite for this exercise and its examples, the following resources were created beforehand:
* An SSH public key has been created on the cluster as a `Secret` to be injected into my VM instance during initialization.
* A `NodeNetworkConfigurationPolicy` using the Kubernetes NMState Operator that creates a bridge on the NIC connected to the data center management network.
* A `NetworkAttachmentDefinition` in my VM instance `Namespace` to connect virtual NICs to.
For the sake of completeness, this is what those resources look like:
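As a hedged sketch, with the names, the `vms` namespace, and the physical NIC (`eth1`) all placeholder assumptions rather than values from the original environment:

```yaml
# SSH public key stored as a Secret (key material truncated placeholder).
apiVersion: v1
kind: Secret
metadata:
  name: my-ssh-key
  namespace: vms
stringData:
  key: ssh-ed25519 AAAA... user@example
---
# NMState policy creating a Linux bridge on the management NIC.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-mgmt-policy
spec:
  desiredState:
    interfaces:
      - name: br-mgmt
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          port:
            - name: eth1    # assumption: the NIC on the management network
---
# NetworkAttachmentDefinition pointing virtual NICs at the bridge.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: mgmt-network
  namespace: vms
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "mgmt-network",
      "type": "bridge",
      "bridge": "br-mgmt"
    }
```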
Fortunately, there are KubeVirt implementations that heavily focus on a graphical user interface.
We’ll take a closer look at OKD, the upstream Kubernetes distribution of OpenShift, and Harvester, a Hyperconverged Infrastructure (HCI) solution built for VMs on KubeVirt with striking simplicity.

OKD is the upstream open source project of Red Hat OpenShift. Enabling virtualization is a two-click operation, and the platform is considered the gold standard for managing VMs and containers with a unified control plane. KubeVirt has been part of OKD and OpenShift since 2020.

Harvester is an open source HCI solution primarily focused on running a highly opinionated stack of software and tools on Kubernetes, designed solely for running VMs. Harvester can be consumed by Rancher, allowing Rancher to deploy and manage Kubernetes clusters on Harvester in a symbiotic relationship.

There are a few distinct patterns for managing cloud compute instances (in this case, VMs on KubeVirt) with Ansible.
* Declaratively CRUD (Create, Read, Update, Delete) the instances from a pre-rendered inventory, preferably templatized with Ansible and idempotent with the desired parameters. Manage the OS and apps with playbooks using the rendered inventory.
* Imperatively CRUD the instances with some other tooling, either from the cloud provider directly or idempotently with something like OpenTofu. Employ dynamic inventory plugins to manage the OS and apps inside the instances.
* Imperatively CRUD the instances with Ansible playbooks, using a dynamic inventory plugin to manage the OS and apps.
For the sake of simplicity and clarity, the examples will imperatively CRUD the instances and showcase the dynamic inventory plugin with KubeVirt. In a production scenario where collaboration among engineers is required, the first option is the more elegant choice.
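As a sketch of that third pattern using the `kubevirt.core` Ansible collection (the VM name, `vms` namespace, and inventory scope are placeholders), a playbook imperatively ensures the instance exists:

```yaml
# playbook.yml — imperatively ensure a VM exists (kubevirt.core collection).
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the VM instance exists
      kubevirt.core.kubevirt_vm:
        name: example-vm
        namespace: vms
        state: present
        running: true
```

and a small inventory file enables the dynamic inventory plugin:

```yaml
# inventory.kubevirt.yml — dynamic inventory of VirtualMachineInstances.
plugin: kubevirt.core.kubevirt
namespaces:
  - vms
```

Running `ansible-inventory -i inventory.kubevirt.yml --list` then resolves the running instances into hosts that subsequent playbooks can target.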