// modules/virt-about-hostpath-provisioner.adoc
:_content-type: CONCEPT
[id="virt-about-hostpath-provisioner_{context}"]
= About the hostpath provisioner (HPP)

When you install the {VirtProductName} Operator, the Hostpath Provisioner Operator is automatically installed. The HPP is a local storage provisioner, designed for {VirtProductName}, that the Hostpath Provisioner Operator creates. To use the HPP, you must create an HPP custom resource.

[IMPORTANT]
====
In {VirtProductName} 4.10, the HPP Operator configures the Kubernetes CSI driver. The Operator also recognizes the existing (legacy) format of the custom resource.

The legacy HPP and the CSI host path driver are supported in parallel for a number of releases. However, at some point the legacy HPP will no longer be supported. If you use the HPP, plan to create a storage class for the CSI driver as part of your migration strategy.
====

If you upgrade to {VirtProductName} version 4.10 on an existing cluster, the HPP Operator is upgraded and the system performs the following actions:

* The CSI driver is installed.
* The CSI driver is configured with the contents of your legacy custom resource.

If you install {VirtProductName} version 4.10 on a new cluster, you must create an HPP custom resource that includes a `storagePools` stanza.
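A minimal `HostPathProvisioner` custom resource with a basic `storagePools` entry might look like the following sketch. The pool name is a placeholder, and the API version and `workload` node selector are assumptions based on the `hostpathprovisioner.kubevirt.io` API group:

[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: <1>
  - name: any_name
    path: "/var/myvolumes" <2>
  workload:
    nodeSelector:
      kubernetes.io/os: linux
----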
<1> The `storagePools` stanza is an array to which you can add multiple entries.
<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. If it is, users can fill the operating system partition, which can degrade performance or cause the node to become unstable or unusable.
= Creating a storage pool by using a pvcTemplate specification in a hostpath provisioner (HPP) custom resource

If you have a single, large persistent volume (PV) on your node, you might want to virtually divide the volume and use one partition to store only the HPP volumes. By defining a storage pool with a `pvcTemplate` specification in the HPP custom resource, you can virtually split the PV into multiple smaller volumes, providing more flexibility in data allocation.

The `pvcTemplate` matches the `spec` portion of a persistent volume claim (PVC). For example:
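A `pvcTemplate` stanza might look like the following sketch; the storage class name and the requested size are placeholder values:

[source,yaml]
----
pvcTemplate: <1>
  volumeMode: Block
  storageClassName: my-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
----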
<1> A `pvcTemplate` is the `spec` (specification) section of a PVC.
The Operator creates a PVC from the PVC template for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.

You can create any combination of storage pools. You can combine standard storage pools with storage pools that use PVC templates in the `storagePools` stanza.
.Procedure
. Create a YAML file for the CSI custom resource specifying a single `pvcTemplate` storage pool. For example:
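The following sketch shows one possible shape for such a custom resource. The pool name, storage class name, and requested storage size are placeholders, and the API version is an assumption based on the `hostpathprovisioner.kubevirt.io` API group:

[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: <1>
  - name: my-storage-pool
    path: "/var/myvolumes" <2>
    pvcTemplate:
      volumeMode: Block <3>
      storageClassName: my-storage-class <4>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi <5>
  workload:
    nodeSelector:
      kubernetes.io/os: linux
----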
<1> The `storagePools` stanza is an array to which you can add multiple entries.
<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. If it is, users of the volumes can fill the operating system partition, which can degrade performance or cause the node to become unstable or unusable.
<3> The `volumeMode` parameter is optional and can be either `Block` or `Filesystem`, but, if used, it must match the provisioned volume format. The default value is `Filesystem`. If `volumeMode` is `Block`, the mounting pod creates an XFS file system on the block volume before mounting it.
<4> If the `storageClassName` parameter is omitted, the default storage class is used to create PVCs. If you omit `storageClassName`, ensure that the HPP storage class is not the default storage class.
<5> You can specify statically or dynamically provisioned storage. In either case, ensure that the requested storage size is appropriate for the volume that you want to virtually divide; otherwise, the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
// modules/virt-creating-storage-class.adoc
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class.

To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the `storagePools` stanza.

[NOTE]
====
You cannot update a `StorageClass` object's parameters after you create it.
====

[NOTE]
====
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By setting the `volumeBindingMode` parameter of the `StorageClass` object to `WaitForFirstConsumer`, the binding and provisioning of the PV are delayed until a pod that uses the PVC is created.
====
[id="virt-creating-storage-class-csi_{context}"]
== Creating a storage class for the CSI driver with the storagePools stanza

Use this procedure to create a storage class for use with the HPP CSI driver implementation. You must create this storage class to use the HPP in {VirtProductName} 4.10 and later.

.Procedure

. Create a YAML file for defining the storage class. For example:
+
[source,terminal]
----
$ touch <storageclass_csi>.yaml
----

. Edit the file. For example:
+
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi <1>
provisioner: kubevirt.io.hostpath-provisioner <2>
reclaimPolicy: Delete <3>
volumeBindingMode: WaitForFirstConsumer <4>
parameters:
  storagePool: <any_name> <5>
----
<1> Assign any meaningful name to the storage class. In this example, `csi` specifies that the class uses the CSI provisioner instead of the legacy provisioner. Choosing descriptive storage class names, based on legacy or CSI driver provisioning, eases the implementation of your migration strategy.
<2> The legacy provisioner uses `kubevirt.io/hostpath-provisioner`. The CSI driver uses `kubevirt.io.hostpath-provisioner`.
<3> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the storage class defaults to `Delete`.
<4> The `volumeBindingMode` parameter determines when dynamic provisioning and volume binding occur. Specify `WaitForFirstConsumer` to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.
<5> `<any_name>` must match the name of the storage pool, which you define in the HPP custom resource.
In addition to configuring a basic storage pool for use with the HPP, you can optionally create single storage pools with the `pvcTemplate` specification, as well as multiple storage pools.
* xref:../../../virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc#virt-customizing-storage-profile_virt-creating-data-volumes[Customizing the storage profile]