---
title: Management Paradigms for Virtual Machines running on Kubernetes
date: 2025-03-06T20:13:53.863Z
featuredBlog: true
author: Michael Mattsson
authorimage: /img/portrait-192.png
disable: false
tags:
- kubernetes
- virtualization
- ansible
- kubevirt
---

With the rise of virtual machine containerization, it’s imperative to familiarize ourselves with the different aspects of performing VM management on Kubernetes, from crude CLIs, to declarative GitOps patterns, all the way to lush UIs where your next VM is just a right-click away.

This blog post brushes over the basics of VM management with the most common patterns, to give you an idea of what tools and processes to adopt in your organization.

But first, a brief introduction to KubeVirt.

# A KubeVirt Crash Course

KubeVirt provides Kubernetes users with abstractions for the Kernel-based Virtual Machine (KVM). KVM has been around for about two decades, several successful commercial hypervisors have been built around the implementation, and it is at this point considered mature.

KubeVirt itself does not provide the kind of user interface most VM administrators are used to. The point of abstraction is standard Kubernetes tooling: manipulating API resources of different Kinds provided by Custom Resource Definitions (CRDs).

The CRDs allow users to manage VM resources through a set of KubeVirt’s controllers.

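As a quick illustration (assuming `kubectl` access to a cluster where KubeVirt is already installed), the Kinds provided by the CRDs can be listed straight from the API:

```bash
# Lists the resource Kinds served by the KubeVirt API group,
# e.g. VirtualMachine, VirtualMachineInstance and friends
kubectl api-resources --api-group=kubevirt.io
```
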
insert diagram here

Deploying KubeVirt on upstream Kubernetes and other distributions is straightforward. The [official documentation](https://kubevirt.io/user-guide/) walks through the different distributions and the platform-specific quirks that need to be considered.

The examples below use KubeVirt provided by the KubeVirt HyperConverged Cluster Operator (HCO) installed on OKD, the upstream project of OpenShift.

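With HCO in place, deploying KubeVirt itself boils down to creating a single custom resource. This is a minimal sketch, assuming the upstream operator’s default namespace and the default tunables; consult the HCO documentation before adapting it:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}
```
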
Most VM administrators connect VMs to existing networks that assign IP addresses and DNS names. Having the VM immediately reachable from your desktop computer, or from already established infrastructure management tools, makes the transition from legacy VM management platforms to KubeVirt much smoother.

As a prerequisite for this exercise and the examples, the following resources have been created beforehand:

- An SSH public key has been created on the cluster as a `Secret` to be injected into my VM instance during initialization.
- A `NodeNetworkConfigurationPolicy`, using the Kubernetes NMState Operator, that creates a bridge on the NIC connected to the data center management network.
- A `NetworkAttachmentDefinition` in my VM instance `Namespace` to connect virtual NICs to.

For the sake of completeness, this is what those resources look like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hpe-vmi
---
apiVersion: v1
kind: Secret
metadata:
  name: desktop
  namespace: hpe-vmi
stringData:
  key: ssh-rsa <public key string> you@yourdesktop
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br0-ens224
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  maxUnavailable: 3
  desiredState:
    interfaces:
      - name: br0
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens224
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: mgmt
  namespace: hpe-vmi
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "mgmt",
      "type": "cnv-bridge",
      "bridge": "br0",
      "ipam": {},
      "macspoofchk": true,
      "preserveDefaultVlan": false
    }
```

Another essential prerequisite is that a `StorageClass` exists on the cluster that supports KubeVirt. The examples below use the HPE CSI Driver for Kubernetes, but it could be any vendor or platform supporting the bare minimum requirements for KubeVirt; see the KubeVirt [admin guide](https://kubevirt.io/user-guide/storage/clone_api/) for the details.

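One way to sanity check the storage setup is to inspect the `StorageProfile` that the Containerized Data Importer (CDI) derives from the `StorageClass` (the profile name mirrors the `StorageClass` name; substitute your own below):

```bash
# The claimPropertySets and cloneStrategy fields reveal what CDI
# will use when cloning DataSources into new PVCs
kubectl get storageprofile <storage-class-name> -o yaml
```
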
Now that the environment is primed, let’s provision a VM and take KubeVirt for a spin.

# The Command Line Interface

It is entirely possible to use `kubectl` out-of-the-box to deploy and manage VM resources. The `virtctl` CLI features a richer experience, with the ability to upload disk images, connect to the VM console, and manage power states more easily. The most important task of `virtctl` is to render tedious manifests from just a few arguments to deploy new VMs.

Installing `virtctl` varies by platform and KubeVirt distribution. It’s advised to keep the client and server versions the same, which at the time of writing is 1.4.0. If using a Mac with Homebrew installed, it’s simply:

```bash
brew install virtctl
```

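To verify the client against the cluster (the exact output layout may differ between versions):

```bash
# Prints both the client and the server version of KubeVirt
virtctl version
```
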
First, we need to inform ourselves of what `DataSources` are available on the cluster. Building new `DataSources` or importing new ones is out of scope for this blog post. VMs are cloned into new `PersistentVolumeClaims` (PVCs) from `DataSources`. List the existing `DataSources` on the cluster:

```bash
$ kubectl get datasources -A
NAMESPACE            NAME             AGE
kubevirt-os-images   centos-stream8   13h
kubevirt-os-images   centos-stream9   13h
kubevirt-os-images   centos6          13h
kubevirt-os-images   centos7          13h
kubevirt-os-images   fedora           13h
kubevirt-os-images   opensuse         13h
kubevirt-os-images   rhel7            13h
kubevirt-os-images   rhel8            13h
kubevirt-os-images   rhel9            13h
kubevirt-os-images   ubuntu           13h
kubevirt-os-images   win10            13h
kubevirt-os-images   win11            13h
kubevirt-os-images   win2k16          13h
kubevirt-os-images   win2k19          13h
kubevirt-os-images   win2k22          13h
```

Not all `DataSources` are populated by default. On OKD, only “fedora” and “centos-stream9” are available. This can be verified by examining the `DataImportCrons`:

```bash
$ kubectl get dataimportcrons -A
NAMESPACE            NAME                        FORMAT
kubevirt-os-images   centos-stream9-image-cron   pvc
kubevirt-os-images   fedora-image-cron           pvc
```

Let’s create a Fedora VM, assign the SSH public key, and connect it to the management LAN. But first, create a manifest named “my-network.yaml” that describes the network we want to connect the VM to:

```yaml
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - bridge: {}
              model: virtio
              name: my-vnic-0
      networks:
        - multus:
            networkName: mgmt
          name: my-vnic-0
```

Now, create the VM and attach it to the network:

```bash
virtctl create vm --name my-vm-0 \
  --access-cred type:ssh,src:desktop,user:fedora \
  --volume-import=type:ds,src:kubevirt-os-images/fedora,size:64Gi \
  | kubectl create -n hpe-vmi -f- && \
  kubectl patch vm/my-vm-0 -n hpe-vmi --type=merge --patch-file my-network.yaml
```

Monitor the progress of the VM:

```bash
$ kubectl get vm -n hpe-vmi -w
NAME      AGE   STATUS         READY
my-vm-0   13s   Provisioning   False
my-vm-0   29s   Starting       False
my-vm-0   42s   Running        False
my-vm-0   42s   Running        True
```

Once the VM is running, it’s possible to log in with the SSH identity and the hostname given to the VM (assuming DHCP registers the hostname in DNS on the management network).

```bash
$ ssh fedora@my-vm-0
[fedora@my-vm-0 ~]$
```

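If the hostname hasn’t been registered in DNS, `virtctl` can reach the VM through the Kubernetes API server instead (a handy fallback, assuming the SSH key was propagated as above):

```bash
# Tunnels the SSH session through the cluster API server
virtctl ssh fedora@vm/my-vm-0 -n hpe-vmi
```
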
So, what does the VM instance actually look like? Let’s install some tools and inspect.

```bash
$ sudo dnf install -yq fastfetch virt-what
$ sudo virt-what
redhat
kvm
$ fastfetch --pipe --localip-show-ipv4 false
             .',;::::;,'.                 fedora@my-vm-0
         .';:cccccccccccc:;,.             --------------
      .;cccccccccccccccccccccc;.          OS: Fedora Linux 41 (Cloud Edition) x86_64
    .:cccccccccccccccccccccccccc:.        Host: KubeVirt (RHEL-9.4.0 PC (Q35 + ICH9, 2009))
  .;ccccccccccccc;.:dddl:.;ccccccc;.      Kernel: Linux 6.11.4-301.fc41.x86_64
 .:ccccccccccccc;OWMKOOXMWd;ccccccc:.     Uptime: 7 mins
.:ccccccccccccc;KMMc;cc;xMMc;ccccccc:.    Packages: 550 (rpm)
,cccccccccccccc;MMM.;cc;;WW:;cccccccc,    Shell: bash 5.2.32
:cccccccccccccc;MMM.;cccccccccccccccc:    Terminal: /dev/pts/0
:ccccccc;oxOOOo;MMM000k.;cccccccccccc:    CPU: Intel Core (Haswell, no TSX, IBRS) @ 2.60 GHz
cccccc;0MMKxdd:;MMMkddc.;cccccccccccc;    GPU: Unknown Device 1111 (VGA compatible)
ccccc;XMO';cccc;MMM.;cccccccccccccccc'    Memory: 435.27 MiB / 3.80 GiB (11%)
ccccc;MMo;ccccc;MMW.;ccccccccccccccc;     Swap: 0 B / 3.80 GiB (0%)
ccccc;0MNc.ccc.xMMd;ccccccccccccccc;      Disk (/): 805.66 MiB / 62.92 GiB (1%) - btrfs
 cccccc;dNMWXXXWM0:;cccccccccccccc:,      Locale: en_US.UTF-8
  cccccccc;.:odl:.;cccccccccccccc:,.
    ccccccccccccccccccccccccccccc:'.
      :ccccccccccccccccccccccc:;,..
        ':cccccccccccccccc::;,.
```

Except for the “Host” hint, this looks like any VM instance on a KVM hypervisor.

With `virtctl` it’s possible to live migrate, pause/unpause, stop/start, and restart the VM.

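A few illustrative lifecycle invocations against the VM created above (a sketch; see `virtctl help` for the full set of subcommands):

```bash
# Graceful power operations
virtctl stop my-vm-0 -n hpe-vmi
virtctl start my-vm-0 -n hpe-vmi
virtctl restart my-vm-0 -n hpe-vmi

# Freeze/resume the guest without a power cycle
virtctl pause vm my-vm-0 -n hpe-vmi
virtctl unpause vm my-vm-0 -n hpe-vmi

# Live migrate the VM to another node
virtctl migrate my-vm-0 -n hpe-vmi
```

Deleting the VM requires `kubectl`:
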
```bash
kubectl delete -n hpe-vmi vm/my-vm-0
```

This will remove all resources created with `virtctl`, including the PVCs.

# UX with Web UIs

KubeVirt does not have an official graphical user interface. That is a high barrier for new users who are familiar with legacy VM management solutions where everything is a right-click away, structured in an intuitive manner. In a way, the KubeVirt project assumes users have fundamental KVM knowledge and are able to scrape by managing Kubernetes resources through the CLI.

Fortunately, there are KubeVirt implementations that heavily focus on a graphical user experience and provide a great way to learn and explore the capabilities, very similar to legacy hypervisors.

We’ll take a closer look at OKD, the upstream Kubernetes distribution of OpenShift, and Harvester, an HCI solution built for VMs on KubeVirt with striking simplicity.

OKD screenshot

OKD is the upstream open source project of Red Hat OpenShift. Enabling virtualization is a two-click operation, and it is considered the gold standard for managing VMs and containers with a unified control plane. KubeVirt has been part of OKD and OpenShift since 2020.

Harvester screenshot

Harvester is an open source Hyper-Converged Infrastructure (HCI) solution primarily focused on running a highly opinionated stack of software and tools on Kubernetes, designed solely for running VMs. Harvester can be consumed by Rancher, allowing Rancher to deploy and manage Kubernetes clusters on Harvester in a symbiotic relationship.

Walking through the UIs is out of scope for this blog post, but the same outcomes can be accomplished in a few clicks, similar to using the CLI with `virtctl` and `kubectl`.

# Ansible

Using CLIs and graphical UIs is great for exploratory administration and one-offs. They’re usually tedious and error prone when it comes to repeating the same set of tasks indefinitely. This is where Ansible comes in. Idempotent and declarative interfaces lend themselves to distilling very complex tasks across multiple layers of infrastructure, gaining full control all the way up to deploying the application. This kind of IT automation enables GitOps and self-service patterns in large scale environments. Write once, delegate, and reuse with ease, like cookie cutter templates.

Ansible has historically been well integrated with other KVM-based hypervisors such as oVirt/RHEV and provides VM management at scale quite elegantly.

Ansible can be installed on your desktop computer in a multitude of ways and will not be covered in this blog. Once Ansible is in place, install the KubeVirt collection:

```bash
ansible-galaxy collection install kubevirt.core
```

There are a couple of distinct patterns for managing cloud compute instances (VMs on KubeVirt in this case) with Ansible.

- Declaratively CRUD (Create, Read, Update, Delete) the instances from a pre-rendered inventory, preferably templatized with Ansible, idempotent with desired parameters. Manage the OS and apps with playbooks using the rendered inventory (see the sketch after this list).
- Imperatively CRUD the instances with some other tooling, either from the cloud provider directly or idempotently with something like OpenTofu. Employ dynamic inventory plugins to manage the OS and apps inside the instances.
- Imperatively CRUD the instances with Ansible playbooks and use a dynamic inventory plugin to manage the OS and apps.

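As a rough sketch of the first pattern (the file name, group, and host entries below are made up for illustration), a pre-rendered inventory could look like:

```yaml
# inventory/vms.yaml - a hand-rolled, declarative inventory sketch
all:
  children:
    kubevirt_vms:
      hosts:
        my-vm-0:
          ansible_user: fedora
        my-vm-1:
          ansible_user: fedora
```

The same rendered inventory then drives both the playbooks that CRUD the VMs and the playbooks that configure the OS and apps inside them.
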
For the sake of simplicity and clarity, the examples will imperatively CRUD the instances and showcase the dynamic inventory plugin with KubeVirt. In a production scenario where collaboration among engineers is required, the first option is the more elegant choice.

Create a playbook named “create_vm.yaml” or similar.

```yaml
---
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure VM name
      assert:
        that: vm is defined
    - name: Create a VM
      kubevirt.core.kubevirt_vm:
        state: present
        name: "{{ vm }}"
        namespace: hpe-vmi
        labels:
          app: my-example-label
        instancetype:
          name: u1.medium
        preference:
          name: fedora
        data_volume_templates:
          - metadata:
              name: "{{ vm }}-0"
            spec:
              sourceRef:
                kind: DataSource
                name: fedora
                namespace: kubevirt-os-images
              storage:
                resources:
                  requests:
                    storage: 64Gi
        spec:
          domain:
            devices:
              interfaces:
                - name: mgmt
                  bridge: {}
          networks:
            - name: mgmt
              multus:
                networkName: mgmt
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users:
                      - fedora
                source:
                  secret:
                    secretName: desktop
          volumes:
            - cloudInitConfigDrive:
                userData: |-
                  #cloud-config
                  # The default username is: fedora
                  runcmd:
                    - [ setsebool, -P, 'virt_qemu_ga_manage_ssh', 'on' ]
              name: cloudinitdisk
            - dataVolume:
                name: "{{ vm }}-0"
              name: "{{ vm }}-0"
        wait: yes
```

Many attributes have been hardcoded in this example, but it illustrates how similar the rendered resources are to what `virtctl` outputs based on the parameters provided.

Use the playbook to create a VM:

```bash
ansible-playbook -e vm=my-vm-0 create_vm.yaml
```

It takes a minute or so for the VM to come up. When the prompt comes back, create a file named “hosts.kubevirt.yaml” (the “kubevirt.yaml” part of the filename is mandatory):

```yaml
plugin: kubevirt.core.kubevirt
namespaces:
  - hpe-vmi
host_format: "{name}"
network_name: mgmt
label_selector: app=my-example-label
compose:
  ansible_user: "'fedora'"
```

It’s now possible to use the KubeVirt inventory plugin to manage the OS and apps in the VM. Let’s see if it connects:

```bash
$ ansible -i hosts.kubevirt.yaml -m ping my-vm-0
my-vm-0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.13"
    },
    "changed": false,
    "ping": "pong"
}
```

At this point it’s possible to manage the VM like any other host provisioned on any kind of server, hypervisor, or cloud platform.

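To close the loop, here’s a minimal sketch of a playbook run against the dynamic inventory (the playbook name “site.yaml” and the chrony package are made up for illustration):

```yaml
---
# site.yaml - manage the OS inside the VM over the dynamic inventory
- hosts: all
  become: true
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.dnf:
        name: chrony
        state: present
```

Run it with `ansible-playbook -i hosts.kubevirt.yaml site.yaml`, exactly as you would against any other inventory.
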
# Summary

It doesn’t matter what your distinct VM management workflow looks like; KubeVirt serves all the popular patterns. That said, current tools and processes will require an overhaul, so why not switch to idempotent VM management through GitOps while transitioning from your legacy hypervisor? That's a topic for another day.

Connect with the HPE Developer Community via [Slack](https://developer.hpe.com/slack-signup/) or sign up for the [Munch & Learn Technology Talks](https://developer.hpe.com/campaign/munch-and-learn/) to immerse yourself in the latest breakthrough technologies from HPE, customers, and partners.