{product-title} enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing LVM volumes for Kubernetes.

LVMS provisions new logical volume management (LVM) logical volumes (LVs) for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM volume group (VG) on the host node. LVs are provisioned only for scheduled pods.

[id="lvms-deployment"]
== LVMS Deployment

LVMS is automatically deployed onto the cluster in the `openshift-storage` namespace after {product-title} boots.

LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the volume group's remaining free storage. For more information about `StorageCapacity` tracking, see link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].

[id="lvms-configuring"]
== Configuring the LVMS


{product-title} supports passing through a user's LVMS configuration and allows users to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. The LVMS configuration file can be edited at any time. You must restart {product-title} to deploy configuration changes.

The following `config.yaml` file shows a basic LVMS configuration:

.LVMS YAML configuration
[source,yaml]
----
socket-name: <1>
device-classes: <2>
  - name: <3>
    volume-group: <4>
    spare-gb: <5>
    default: <6>
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 10
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 10
    stripe: <7>
    stripe-size: <8>
  - name: raid
    volume-group: raid-vg
    lvcreate-options: <9>
      - --type=raid1
----
<1> String. The UNIX domain socket endpoint of gRPC. Defaults to `/run/topolvm/lvmd.sock`.
<2> `map[string]DeviceClass`. The `device-class` settings.
<3> String. The name of the `device-class`.
<4> String. The group where the `device-class` creates the logical volumes.
<5> uint64. Storage capacity in GiB to be spared. Defaults to `10`.
<6> Boolean. Indicates that the `device-class` is used by default. Defaults to `false`.
<7> uint. The number of stripes in the logical volume.
<8> String. The amount of data that is written to one device before moving to the next device.
<9> String. Extra arguments to pass to `lvcreate`, for example, `["--type=raid1"]`.

[NOTE]
====
Striping can be configured either with the dedicated options (`stripe` and `stripe-size`) or with `lvcreate-options`, but not with both at the same time. Using `stripe` and `stripe-size` together with `lvcreate-options` leads to duplicate arguments to `lvcreate`, so never set `lvcreate-options: ["--stripes=n"]` and `stripe: n` simultaneously. You can, however, combine `stripe` with `lvcreate-options` when `lvcreate-options` is not used for striping. For example:

[source,yaml]
----
stripe: 2
lvcreate-options: ["--mirrors=1"]
----
====

[id="setting-lvms-path"]
=== Setting the LVMS path

The `config.yaml` file for LVMS should be written to the same directory as the MicroShift `config.yaml` file. If a MicroShift `config.yaml` file does not exist, MicroShift creates an LVMS YAML file and automatically populates the configuration fields with the default settings. The following paths are checked for the `config.yaml` file, depending on which user runs MicroShift:

[id="lvms-system-requirements"]
== System requirements

{product-title}'s LVMS requires the following system specifications.

[id="lvms-volume-group-name"]
=== Volume Group Name


The default integration of LVMS assumes a volume group named `rhel`. Before LVMS launches, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller fails to start and enters a `CrashLoopBackOff` state.
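For illustration, a minimal `lvmd.yaml` that points LVMS at an existing volume group might look like the following sketch. The `device-class` name is an arbitrary example; the volume group, here the default `rhel`, must already exist on the node.

[source,yaml]
----
# Sketch of a minimal lvmd.yaml device-class entry.
# The group named in volume-group (here the default, rhel)
# must already exist on the node before LVMS starts.
device-classes:
  - name: default        # example device-class name
    volume-group: rhel   # an existing VG with capacity for workloads
    spare-gb: 10
    default: true
----

You can verify that the volume group exists and has free capacity with the `vgs` command before starting {product-title}.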


[id="lvms-volume-size-increments"]
=== Volume size increments


The LVMS provisions storage in increments of 1 GB. Storage requests are rounded up to the nearest gigabyte (GB). When a volume group's capacity is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:


[source,terminal]
----
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
----
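As a sketch of the rounding behavior, a claim that requests less than 1 GB, such as the following (the claim name is illustrative), still results in a 1 GB logical volume:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rounding-example-pvc  # illustrative name
spec:
  storageClassName: topolvm-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi  # rounded up: LVMS provisions a 1 GB logical volume
----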


[id="using-lvms"]
== Using the LVMS


LVMS is deployed with a default `StorageClass`. Any `PersistentVolumeClaim` object without `.spec.storageClassName` defined automatically has a `PersistentVolume` provisioned from the default `StorageClass`.


Use the following procedure to provision and mount a logical volume to a pod.


.Procedure
The following example demonstrates how to provision and mount a logical volume to a pod.
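A minimal sketch of such a workload (all resource names, the image, and the mount path are illustrative) pairs a `PersistentVolumeClaim` that omits `.spec.storageClassName`, so the default `StorageClass` is used, with a pod that mounts the resulting volume:

[source,yaml]
----
# Illustrative sketch: a PVC that uses the default StorageClass
# and a pod that mounts the provisioned logical volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-lv-pvc  # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod  # illustrative name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal  # example image
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /mnt  # example mount path
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-lv-pvc
----

Apply the manifests with `oc apply -f <file>`, then verify that the claim is bound with `oc get pvc`.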