---
title: Management paradigms for virtual machines running on Kubernetes
date: 2025-03-12T20:13:53.863Z
featuredBlog: true
author: Michael Mattsson
authorimage: /img/portrait-192.png
disable: false
tags:
  - kubernetes
  - virtualization
  - ansible
  - kubevirt
---
<style> li { font-size: 27px; line-height: 33px; max-width: none; } </style>
With the rise of virtual machine containerization, it's imperative to familiarize ourselves with the different aspects of performing VM management on Kubernetes. From crude CLIs to declarative GitOps patterns, and further to lush UIs where your next VM is just a right-click away, having a handle on each of these disciplines is essential for Kubernetes VM management, regardless of which role you're in.

Whether you're a classic sysadmin, a site reliability engineer (SRE), or in any IT operations role touching virtualization, the winds of change are catching up. Collectively, we need to re-evaluate the VM estate, understand platform requirements for mission-critical applications, and look for alternatives with the least amount of friction and resistance to ease migration.

KubeVirt, an open source project governed by the Cloud Native Computing Foundation (CNCF), is an add-on for Kubernetes that allows management of virtual machines alongside containers using a single API endpoint. KubeVirt is where a large chunk of the market is gravitating. Whether its abstractions are disguised by a glossy frontend or deployed manually on existing Kubernetes clusters, KubeVirt needs to be considered for any new virtualization project.

This blog post covers the basics of VM management on KubeVirt and the most common patterns, to give you an idea of what tools and processes to adopt in your organization: CLIs, graphical UIs, and idempotent IT automation tools such as Ansible. There are strengths and weaknesses across the different interfaces, but understanding how to operate them is fundamental for any VM management journey with KubeVirt.

But first, let me give you a brief introduction to KubeVirt.

# A KubeVirt crash course

KubeVirt provides abstractions to Kubernetes users for the Linux Kernel-based Virtual Machine (KVM). KVM has been around for about two decades, with several successful commercial hypervisors built around the implementation, and is at this point considered mature.

KubeVirt itself does not have the kind of user interface most VM administrators are used to. The point of abstraction is standard Kubernetes tooling, manipulating API resources of different `Kinds` provided by `CustomResourceDefinitions` (CRDs).

The CRDs allow users to manage VM resources through a set of KubeVirt's controllers.
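To give a taste of that abstraction, here is a minimal sketch of a `VirtualMachine` resource. The name, memory size, and the public Fedora container disk image are illustrative assumptions, not something from this environment:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: sketch-vm
spec:
  runStrategy: Always               # the controller keeps the VM powered on
  template:
    spec:
      domain:
        devices: {}
        memory:
          guest: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:            # ephemeral boot disk pulled from a registry
            image: quay.io/containerdisks/fedora:latest
```

Applying this manifest with `kubectl` is all it takes for KubeVirt's controllers to schedule and boot the guest.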

Deploying KubeVirt on upstream Kubernetes and other distributions is straightforward. The [official documentation](https://kubevirt.io/user-guide/) walks through the different distributions and the platform-specific quirks that need to be considered.

The examples below use KubeVirt provided by the KubeVirt HyperConverged Cluster Operator installed on OKD, the community distribution of Kubernetes that powers Red Hat OpenShift.

Most VM administrators connect VMs to existing networks that assign IP addresses and DNS names. Having the VM immediately reachable from your desktop computer or other already established infrastructure management tools makes the transition from legacy VM management platforms to KubeVirt much smoother.

As prerequisites for this exercise and its examples, the following resources have been created beforehand:

* An SSH public key stored on the cluster as a `Secret`, to be injected into the VM instance during initialization.
* A `NodeNetworkConfigurationPolicy`, using the Kubernetes NMState Operator, that creates a bridge on a NIC connected to the data center management network.
* A `NetworkAttachmentDefinition` in my VM instance `Namespace` to connect virtual NICs to.

For the sake of completeness, this is what those resources look like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hpe-vmi
---
apiVersion: v1
kind: Secret
metadata:
  name: desktop
  namespace: hpe-vmi
stringData:
  key: ssh-rsa <public key string> you@yourdesktop
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br0-ens224
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  maxUnavailable: 3
  desiredState:
    interfaces:
      - name: br0
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens224
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: mgmt
  namespace: hpe-vmi
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "mgmt",
      "type": "cnv-bridge",
      "bridge": "br0",
      "ipam": {},
      "macspoofchk": true,
      "preserveDefaultVlan": false
    }
```

Another essential prerequisite is a `StorageClass` on the cluster that supports KubeVirt. The examples below use the HPE CSI Driver for Kubernetes, but that could be swapped out for any vendor or platform supporting the bare minimum requirements for KubeVirt (see the KubeVirt [admin guide](https://kubevirt.io/user-guide/storage/clone_api/) for details).
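For reference, a minimal sketch of such a `StorageClass` could look like the following. The name is illustrative, and the backend and secret parameters the HPE CSI Driver requires are omitted for brevity; consult your driver's documentation for a working definition. The `storageclass.kubevirt.io/is-default-virt-class` annotation marks a class as the default for virtualization workloads:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
  annotations:
    storageclass.kubevirt.io/is-default-virt-class: "true"
provisioner: csi.hpe.com          # swap for your vendor's CSI provisioner
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```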

Now that the environment is primed, let's provision a VM and take KubeVirt for a spin.

# The command line interface

It is entirely possible to use `kubectl` out of the box to deploy and manage VM resources. The `virtctl` CLI offers a richer experience, with the ability to upload disk images, connect to the VM console, and manage power states more easily. The most important task of `virtctl` is to render tedious manifests from just a few arguments to deploy new VMs.

Installing `virtctl` varies by platform and KubeVirt distribution. It's advised to keep the client and server versions the same, which at the time of writing is 1.4.0. On a Mac with Brew installed, it's simply:

```shell
brew install virtctl
```
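On Linux, a common alternative is downloading the release binary straight from the KubeVirt GitHub releases. A sketch, assuming an x86_64 Linux desktop; adjust the version and architecture to match your cluster:

```shell
# Compose the download URL for a virtctl release binary
VERSION="v1.4.0"
ARCH="linux-amd64"
URL="https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}"
echo "${URL}"
# Then fetch and install it:
#   curl -L -o virtctl "${URL}" && chmod +x virtctl && sudo mv virtctl /usr/local/bin/
```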

First, we need to find out what `DataSources` are available on the cluster. Building new `DataSources` or importing new ones is out of scope for this blog post. VMs are cloned into new `PersistentVolumeClaims` (PVCs) from `DataSources`. List the existing `DataSources` on the cluster:

```shell
$ kubectl get datasources -A
NAMESPACE            NAME             AGE
kubevirt-os-images   centos-stream8   13h
kubevirt-os-images   centos-stream9   13h
kubevirt-os-images   centos6          13h
kubevirt-os-images   centos7          13h
kubevirt-os-images   fedora           13h
kubevirt-os-images   opensuse         13h
kubevirt-os-images   rhel7            13h
kubevirt-os-images   rhel8            13h
kubevirt-os-images   rhel9            13h
kubevirt-os-images   ubuntu           13h
kubevirt-os-images   win10            13h
kubevirt-os-images   win11            13h
kubevirt-os-images   win2k16          13h
kubevirt-os-images   win2k19          13h
kubevirt-os-images   win2k22          13h
```

Not all `DataSources` are populated by default. On OKD, only "fedora" and "centos-stream9" are available. This can be verified by examining the `DataImportCrons`:

```shell
$ kubectl get dataimportcrons -A
NAMESPACE            NAME                        FORMAT
kubevirt-os-images   centos-stream9-image-cron   pvc
kubevirt-os-images   fedora-image-cron           pvc
```
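Populating one of the other `DataSources` is typically done with a `DataImportCron` that polls a container registry for updated disk images. A sketch for CentOS Stream 8; the schedule, source image, and size are illustrative assumptions:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: centos-stream8-image-cron
  namespace: kubevirt-os-images
spec:
  schedule: "0 */12 * * *"            # poll the registry twice a day
  managedDataSource: centos-stream8   # DataSource to keep up to date
  garbageCollect: Outdated
  importsToKeep: 3
  template:
    spec:
      source:
        registry:
          url: docker://quay.io/containerdisks/centos-stream:8
      storage:
        resources:
          requests:
            storage: 30Gi
```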

Let's create a Fedora VM, assign the SSH public key, and connect it to the management LAN. But first, create a manifest named "my-network.yaml" to describe the network we want to connect the VM to:

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - bridge: {}
              model: virtio
              name: my-vnic-0
      networks:
        - multus:
            networkName: mgmt
          name: my-vnic-0
```

Now, create the VM and attach it to the network:

```shell
virtctl create vm --name my-vm-0 \
  --access-cred type:ssh,src:desktop,user:fedora \
  --volume-import=type:ds,src:kubevirt-os-images/fedora,size:64Gi \
| kubectl create -n hpe-vmi -f- && \
kubectl patch vm/my-vm-0 -n hpe-vmi --type=merge --patch-file my-network.yaml
```
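Note that `virtctl create vm` only renders a manifest to stdout; nothing touches the cluster until `kubectl` applies it. That makes it easy to review the manifest first, or commit it to Git for a declarative workflow. A sketch of the same operation split into render and apply steps:

```shell
# Render the VirtualMachine manifest to a file for review
virtctl create vm --name my-vm-0 \
  --access-cred type:ssh,src:desktop,user:fedora \
  --volume-import=type:ds,src:kubevirt-os-images/fedora,size:64Gi \
  > my-vm-0.yaml

less my-vm-0.yaml                          # inspect what will be created
kubectl create -n hpe-vmi -f my-vm-0.yaml  # apply when satisfied
```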

Monitor the progress of the VM:

```shell
$ kubectl get vm -n hpe-vmi -w
NAME      AGE   STATUS         READY
my-vm-0   13s   Provisioning   False
my-vm-0   29s   Starting       False
my-vm-0   42s   Running        False
my-vm-0   42s   Running        True
```

Once the VM is running, it's possible to log in with the SSH identity and the hostname given to the VM (assuming DHCP registers the hostname in DNS on the management network):

```shell
$ ssh fedora@my-vm-0
[fedora@my-vm-0 ~]$
```
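If the VM is not (yet) reachable over the management network, `virtctl` can reach it through the Kubernetes API server instead. A sketch against the VM created above:

```shell
# Serial console through the API server (exit with Ctrl+])
virtctl console my-vm-0 -n hpe-vmi

# SSH tunneled through the API server, no routable VM network needed
virtctl ssh fedora@vmi/my-vm-0 -n hpe-vmi
```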

So, what does the VM instance actually look like? Let's install some tools and inspect:

```shell
$ sudo dnf install -yq fastfetch virt-what
$ sudo virt-what
redhat
kvm
$ fastfetch --pipe --localip-show-ipv4 false
             .',;::::;,'.                 fedora@my-vm-0
         .';:cccccccccccc:;,.             --------------
      .;cccccccccccccccccccccc;.          OS: Fedora Linux 41 (Cloud Edition) x86_64
    .:cccccccccccccccccccccccccc:.        Host: KubeVirt (RHEL-9.4.0 PC (Q35 + ICH9, 2009))
  .;ccccccccccccc;.:dddl:.;ccccccc;.      Kernel: Linux 6.11.4-301.fc41.x86_64
 .:ccccccccccccc;OWMKOOXMWd;ccccccc:.     Uptime: 7 mins
.:ccccccccccccc;KMMc;cc;xMMc;ccccccc:.    Packages: 550 (rpm)
,cccccccccccccc;MMM.;cc;;WW:;cccccccc,    Shell: bash 5.2.32
:cccccccccccccc;MMM.;cccccccccccccccc:    Terminal: /dev/pts/0
:ccccccc;oxOOOo;MMM000k.;cccccccccccc:    CPU: Intel Core (Haswell, no TSX, IBRS) @ 2.60 GHz
cccccc;0MMKxdd:;MMMkddc.;cccccccccccc;    GPU: Unknown Device 1111 (VGA compatible)
ccccc;XMO';cccc;MMM.;cccccccccccccccc'    Memory: 435.27 MiB / 3.80 GiB (11%)
ccccc;MMo;ccccc;MMW.;ccccccccccccccc;     Swap: 0 B / 3.80 GiB (0%)
ccccc;0MNc.ccc.xMMd;ccccccccccccccc;      Disk (/): 805.66 MiB / 62.92 GiB (1%) - btrfs
cccccc;dNMWXXXWM0:;cccccccccccccc:,       Locale: en_US.UTF-8
cccccccc;.:odl:.;cccccccccccccc:,.
ccccccccccccccccccccccccccccc:'.
:ccccccccccccccccccccccc:;,..
 ':cccccccccccccccc::;,.
```

Except for the "Host" hint, this looks like any VM instance on a KVM hypervisor.

With `virtctl` it's possible to live migrate, pause/unpause, stop/start, and restart the VM. Deleting the VM requires `kubectl`.
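Those lifecycle operations map to `virtctl` subcommands. A sketch against the VM from this example:

```shell
virtctl stop my-vm-0 -n hpe-vmi         # graceful power off
virtctl start my-vm-0 -n hpe-vmi        # power on
virtctl restart my-vm-0 -n hpe-vmi      # reboot
virtctl pause vm my-vm-0 -n hpe-vmi     # suspend the guest
virtctl unpause vm my-vm-0 -n hpe-vmi   # resume
virtctl migrate my-vm-0 -n hpe-vmi      # live migrate to another node
```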

```shell
kubectl delete -n hpe-vmi vm/my-vm-0
```

This will remove all resources created with `virtctl`, including `PVCs`.

# User experience with web user interfaces

KubeVirt does not have an official graphical user interface. That is a high barrier to entry for new users who are familiar with legacy VM management solutions where everything is a right-click away, structured in an intuitive manner. In a way, the KubeVirt project assumes that the user has a fundamental knowledge of KVM and can get by managing Kubernetes resources through the CLI.

Fortunately, there are KubeVirt implementations that heavily focus on a graphical user experience and provide a great way to learn and explore the capabilities, very similar to legacy hypervisors.

Let's take a closer look at OKD, the upstream Kubernetes distribution of OpenShift, and Harvester, a hyperconverged infrastructure (HCI) solution built for VMs on KubeVirt with striking simplicity.

OKD is the upstream open source project of Red Hat OpenShift. Enabling virtualization is a two-click operation, and the platform is considered the gold standard for managing VMs and containers with a unified control plane. KubeVirt has been part of OKD and OpenShift since 2020.

Harvester is an open source HCI solution primarily focused on running a highly opinionated stack of software and tools on Kubernetes designed solely for running VMs. Harvester can be consumed by Rancher, allowing Rancher to deploy and manage Kubernetes clusters on Harvester in a symbiotic relationship.

Walking through the UIs is out of scope for this blog post, but the same outcomes can be accomplished in a few clicks, similar to using the CLI with `virtctl` and `kubectl`.

# Ansible

CLIs and graphical UIs are great for exploratory administration and one-offs, but they're usually tedious and error prone when it comes to repeating the same set of tasks indefinitely. This is where Ansible comes in. Its idempotent and declarative interfaces distill very complex tasks across multiple layers of infrastructure, all the way up to deploying the application. This kind of IT automation lends itself to GitOps and self-service patterns in large-scale environments. Write once, delegate, and reuse with ease, like cookie cutter templates.

Ansible has historically been well integrated with other KVM-based hypervisors, such as oVirt/RHEV, and provides VM management at scale quite elegantly.

Ansible can be installed on your desktop computer in a multitude of ways that will not be covered in this blog post. Once Ansible is in place, install the KubeVirt collection:

```shell
ansible-galaxy collection install kubevirt.core
```

There are a couple of distinct patterns for managing cloud compute instances (VMs on KubeVirt in this case) with Ansible:

* Declaratively CRUD (Create, Read, Update, Delete) the instances from a pre-rendered inventory, preferably templated with Ansible and idempotent with desired parameters. Manage the OS and apps with playbooks using the rendered inventory.
* Imperatively CRUD the instances with some other tooling, either from the cloud provider directly or idempotently with something like OpenTofu. Employ dynamic inventory plugins to manage the OS and apps inside the instances.
* Imperatively CRUD the instances with Ansible playbooks and use a dynamic inventory plugin to manage the OS and apps.

For the sake of simplicity and clarity, the examples below imperatively CRUD the instances and showcase the dynamic inventory plugin with KubeVirt. In a production scenario where collaboration among engineers is required, the first option is the more elegant choice.

Create a playbook named "create_vm.yaml" or similar:

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure VM name
      assert:
        that: vm is defined
    - name: Create a VM
      kubevirt.core.kubevirt_vm:
        state: present
        name: "{{ vm }}"
        namespace: hpe-vmi
        labels:
          app: my-example-label
        instancetype:
          name: u1.medium
        preference:
          name: fedora
        data_volume_templates:
          - metadata:
              name: "{{ vm }}-0"
            spec:
              sourceRef:
                kind: DataSource
                name: fedora
                namespace: kubevirt-os-images
              storage:
                resources:
                  requests:
                    storage: 64Gi
        spec:
          domain:
            devices:
              interfaces:
                - name: mgmt
                  bridge: {}
          networks:
            - name: mgmt
              multus:
                networkName: mgmt
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users:
                      - fedora
                source:
                  secret:
                    secretName: desktop
          volumes:
            - cloudInitConfigDrive:
                userData: |-
                  #cloud-config
                  # The default username is: fedora
                  runcmd:
                    - [ setsebool, -P, 'virt_qemu_ga_manage_ssh', 'on' ]
              name: cloudinitdisk
            - dataVolume:
                name: "{{ vm }}-0"
              name: "{{ vm }}-0"
        wait: yes
```

Many attributes have been hardcoded in this example, but it illustrates the similarities with what `virtctl` outputs based on the parameters provided.

Use the playbook to create a VM:

```shell
ansible-playbook -e vm=my-vm-0 create_vm.yaml
```

It takes a minute or so for the VM to come up. When the prompt comes back, create a file named "hosts.kubevirt.yaml" (the "kubevirt.yaml" part of the filename is mandatory):

```yaml
plugin: kubevirt.core.kubevirt
namespaces:
  - hpe-vmi
host_format: "{name}"
network_name: mgmt
label_selector: app=my-example-label
compose:
  ansible_user: "'fedora'"
```
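Before running any tasks, the inventory the plugin discovers can be inspected. A sketch; both commands need access to the cluster:

```shell
ansible-inventory -i hosts.kubevirt.yaml --graph   # tree view of groups and hosts
ansible-inventory -i hosts.kubevirt.yaml --list    # full host variables as JSON
```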

It's now possible to use the KubeVirt inventory plugin to manage the OS and apps in the VM. Let's see if it connects:

```shell
$ ansible -i hosts.kubevirt.yaml -m ping my-vm-0
my-vm-0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.13"
    },
    "changed": false,
    "ping": "pong"
}
```

At this point, it's possible to manage the VM like any other host provisioned on any kind of server, hypervisor, or cloud platform.
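For example, a short playbook run against the dynamic inventory could configure software inside the VM just like on any other managed host. A sketch; the package choice (chrony) and playbook filename are illustrative assumptions:

```yaml
---
- hosts: my-vm-0
  become: true
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.dnf:
        name: chrony
        state: present
    - name: Ensure chronyd is enabled and running
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```

Saved as "site.yaml", it would run with `ansible-playbook -i hosts.kubevirt.yaml site.yaml`.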

# Summary

It doesn't matter what your distinct VM management workflow looks like; KubeVirt serves all popular patterns. That said, current tools and processes will require an overhaul. Why not switch to idempotent VM management through GitOps while transitioning from your legacy hypervisor? That's a topic for another day.

Connect with the HPE Developer Community via [Slack](https://developer.hpe.com/slack-signup/) or sign up for the [Munch & Learn Technology Talks](https://developer.hpe.com/campaign/munch-and-learn/) to immerse yourself in the latest breakthrough technologies from HPE, customers, and partners.