docs/advanced/settings.md (+3 -1)
@@ -458,7 +458,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. You need to tune those two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
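As context for this hunk, a minimal sketch of applying the setting as a Harvester `Setting` object follows; the `apiVersion`/`kind` are assumed from Harvester's Setting CRD and the ratio value is only an example, not a recommendation.

```yaml
# Illustrative sketch: the apiVersion/kind follow Harvester's Setting CRD,
# and the ratio value shown is an arbitrary example.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: additional-guest-memory-overhead-ratio
value: "2.0"   # valid values are "0" or a string in the range "1.0" to "10.0"
```

After raising the ratio, review the `ResourceQuota` of affected namespaces as the hunk above notes.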
docs/rancher/resource-quota.md (+8 -4)
@@ -23,6 +23,7 @@ In Harvester, ResourceQuota can define usage limits for the following resources:
 - **Storage:** Limits the usage of storage resources.
 
 ## Set ResourceQuota via Rancher
+
 In the Rancher UI, administrators can configure resource quotas for namespaces through the following steps:
 
 1. Click the hamburger menu and choose the **Virtualization Management** tab.
@@ -31,12 +32,11 @@ In the Rancher UI, administrators can configure resource quotas for namespaces t
 
 
 :::note
-The "VM Default Resource Limit" is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource "reservation" and "limit" values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
+The `VM Default Resource Limit` is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource `reservation` and `limit` values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
 
-These configuration will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
+These configurations will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
 :::
 
-
 You can configure the **Namespace** limits as follows:
 
 1. Find the newly created project, and select **Create Namespace**.
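To make the `LimitRange` mapping in the note above concrete, a minimal sketch follows; the namespace and the request/limit figures are assumptions chosen only for illustration.

```yaml
# Sketch of the LimitRange behind the "VM Default Resource Limit" fields:
# "reservation" maps to defaultRequest, "limit" maps to default.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resource-limit   # illustrative name
  namespace: example-namespace   # assumed namespace
spec:
  limits:
    - type: Container
      defaultRequest:            # the "reservation" values
        cpu: 250m
        memory: 256Mi
      default:                   # the "limit" values
        cpu: "1"
        memory: 1Gi
```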
@@ -50,11 +50,14 @@ Attempts to provision VMs for guest clusters are blocked when the resource quota
 
 :::important
 
-Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each has `8 Gi` memory.
+- Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs, each with `8 Gi` of memory. The [additional-guest-memory-overhead-ratio setting](../advanced/settings.md#additional-guest-memory-overhead-ratio) includes a table showing how the final memory of a VM is calculated.
+
+- When you plan to change the Harvester setting [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) to a larger value, remember to review the `ResourceQuota` values and update them accordingly. You need to tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
 
 ## Overhead Memory of Virtual Machine
+
 Upon creating a virtual machine (VM), the VM controller seamlessly incorporates overhead resources into the VM's configuration. These additional resources intend to guarantee the consistent and uninterrupted functioning of the VM. It's important to note that configuring memory limits requires a higher memory reservation due to the inclusion of these overhead resources.
 
 For example, consider the creation of a new VM with the following configuration:
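To put numbers on the `24 Gi` example in the hunk above, a hedged ResourceQuota sketch follows; the per-VM overhead figure is an assumption, since the actual value depends on the VM configuration and Harvester release (see the linked overhead table).

```yaml
# Illustrative sizing: 3 VMs x (8Gi guest memory + ~1Gi assumed overhead) needs
# roughly 27Gi of limits.memory, so a 24Gi quota cannot hold all three.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-memory-quota          # illustrative name
  namespace: example-namespace   # assumed namespace
spec:
  hard:
    limits.memory: 27Gi
```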
@@ -90,6 +93,7 @@ The `Overhead Memory` varies between different Harvester releases (with differen
 :::
 
 ## Automatic adjustment of ResourceQuota during migration
+
 When the allocated resource quota controlled by the `ResourceQuota` object reaches its limit, migrating a VM becomes unfeasible. The migration process automatically creates a new pod mirroring the resource requirements of the source VM. If these pod creation prerequisites surpass the defined quota, the migration operation cannot proceed.
versioned_docs/version-v1.5/advanced/settings.md (+17 -15)
@@ -78,7 +78,7 @@ For more information, see the **Certificate Rotation** section of the [Rancher](
 
 ### `backup-target`
 
-**Definition**: Custom backup target used to store VM backups.
+**Definition**: Custom backup target used to store VM backups.
 
 For more information, see the [Longhorn documentation](https://longhorn.io/docs/1.6.0/snapshots-and-backups/backup-and-restore/set-backup-target/#set-up-aws-s3-backupstore).
@@ … @@
-**Definition**: Configuration of a private registry created for the Harvester cluster.
+**Definition**: Configuration of a private registry created for the Harvester cluster.
 
 The value is stored in the `registries.yaml` file of each node (path: `/etc/rancher/rke2/registries.yaml`). For more information, see [Containerd Registry Configuration](https://docs.rke2.io/install/private_registry) in the RKE2 documentation.
@@ -205,7 +205,7 @@ Changing this setting might cause single-node clusters to temporarily become una
 - Proxy URL for HTTPS requests: `"httpsProxy": "https://<username>:<pswd>@<ip>:<port>"`
 - Comma-separated list of hostnames and/or CIDRs: `"noProxy": "<hostname | CIDR>"`
 
-You must specify key information in the `noProxy` field if you configured the following options or settings:
+You must specify key information in the `noProxy` field if you configured the following options or settings:
 
 | Configured option/setting | Required value in `noProxy`| Reason |
 | --- | --- | --- |
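A sketch of a proxy setting shaped as described in this hunk follows; the proxy address and the `noProxy` entries are placeholders, not recommended values, and the `noProxy` list must still include everything required by the table above.

```yaml
# Sketch only: the proxy URL and CIDRs below are placeholders.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: http-proxy
value: '{"httpProxy": "http://192.168.0.1:3128", "httpsProxy": "http://192.168.0.1:3128", "noProxy": "localhost,127.0.0.1,10.53.0.0/16,.svc,.cluster.local"}'
```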
@@ -252,7 +252,7 @@ debug
 
 **Definition**: Setting that enables and disables the Longhorn V2 Data Engine.
 
-When set to `true`, Harvester automatically loads the kernel modules required by the Longhorn V2 Data Engine, and attempts to allocate 1024 × 2 MiB-sized huge pages (for example, 2 GiB of RAM) on all nodes.
+When set to `true`, Harvester automatically loads the kernel modules required by the Longhorn V2 Data Engine, and attempts to allocate 1024 × 2 MiB-sized huge pages (for example, 2 GiB of RAM) on all nodes.
 
 Changing this setting automatically restarts RKE2 on all nodes but does not affect running virtual machine workloads.
@@ -261,7 +261,7 @@ Changing this setting automatically restarts RKE2 on all nodes but does not affe
 If you encounter error messages that include the phrase "not enough hugepages-2Mi capacity", allow some time for the error to be resolved. If the error persists, reboot the affected nodes.
 
 To disable the Longhorn V2 Data Engine on specific nodes (for example, nodes with less processing and memory resources), go to the **Hosts** screen and add the following label to the target nodes:
@@ -306,7 +306,7 @@ Changes to the server address list are applied to all nodes.
 
 **Definition**: Percentage of physical compute, memory, and storage resources that can be allocated for VM use.
 
-Overcommitting is used to optimize physical resource allocation, particularly when VMs are not expected to fully consume the allocated resources most of the time. Setting values greater than 100% allows scheduling of multiple VMs even when physical resources are notionally fully allocated.
+Overcommitting is used to optimize physical resource allocation, particularly when VMs are not expected to fully consume the allocated resources most of the time. Setting values greater than 100% allows scheduling of multiple VMs even when physical resources are notionally fully allocated.
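As an illustration of the overcommit percentages discussed in this hunk, a sketch of a setting value follows; the setting name `overcommit-config` and the figures shown are assumptions used only to show the shape of such a configuration.

```yaml
# Sketch: with "memory": 150, roughly 1.5x the physical memory can be
# allocated to VMs. All values here are illustrative, not recommendations.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: overcommit-config
value: '{"cpu": 1600, "memory": 150, "storage": 200}'
```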
@@ -444,7 +444,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. You need to tune those two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
@@ -515,7 +517,7 @@ If you misconfigure this setting and are unable to access the Harvester UI and A
 
 **Supported options and values**:
 
-- `protocols`: Enabled protocols.
+- `protocols`: Enabled protocols.
 - `ciphers`: Enabled ciphers.
 
 For more information about the supported options, see [`ssl-protocols`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-protocols) and [`ssl-ciphers`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers) in the Ingress-Nginx Controller documentation.
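A hedged sketch of the `protocols` and `ciphers` options follows; the protocol and cipher strings are examples only and should be chosen according to the linked ingress-nginx documentation.

```yaml
# Sketch only: protocols are space-separated and ciphers colon-separated,
# following the ingress-nginx ssl-protocols / ssl-ciphers conventions.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: ssl-parameters
value: '{"protocols": "TLSv1.2 TLSv1.3", "ciphers": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384"}'
```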
@@ -686,7 +688,7 @@ When the cluster is upgraded in the future, the contents of the `value` field ma
 
 **Versions**: v1.2.0 and later
 
-**Definition**: Additional namespaces that you can use when [generating a support bundle](../troubleshooting/harvester.md#generate-a-support-bundle).
+**Definition**: Additional namespaces that you can use when [generating a support bundle](../troubleshooting/harvester.md#generate-a-support-bundle).
 
 By default, the support bundle only collects resources from the following predefined namespaces:
@@ -729,7 +731,7 @@ You can specify a value greater than or equal to 0. When the value is 0, Harvest
 
 **Versions**: v1.3.1 and later
 
-**Definition**: Number of minutes Harvester allows for collection of logs and configurations (Harvester) on the nodes for the support bundle.
+**Definition**: Number of minutes Harvester allows for collection of logs and configurations (Harvester) on the nodes for the support bundle.
 
 If the collection process is not completed within the allotted time, Harvester still allows you to download the support bundle (without the uncollected data). You can specify a value greater than or equal to 0. When the value is 0, Harvester uses the default value.
@@ … @@
 - `imagePreloadOption`: Options for the image preloading phase.
-
+
 The full ISO contains the core operating system components and all required container images. Harvester can preload these container images to each node during installation and upgrades. When workloads are scheduled to management and worker nodes, the container images are ready to use.
@@ … @@
 If you decide to use `skip`, ensure that the following requirements are met:
 
 - You have a private container registry that contains all required images.
-- Your cluster has high-speed internet access and is able to pull all images from Docker Hub when necessary.
-
+- Your cluster has high-speed internet access and is able to pull all images from Docker Hub when necessary.
+
 Note any potential internet service interruptions and how close you are to reaching your [Docker Hub rate limit](https://www.docker.com/increase-rate-limits/). Failure to download any of the required images may cause the upgrade to fail and may leave the cluster in a middle state.
-
+
 :::
 
 - `parallel` (**experimental**): Nodes preload images in batches. You can adjust this using the `concurrency` option.
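For orientation only, a heavily hedged sketch of how an `imagePreloadOption` strategy might be expressed follows; the enclosing `upgrade-config` setting name and the exact field nesting are assumptions and should be verified against the released documentation before use.

```yaml
# Assumed shape: imagePreloadOption under an upgrade-config setting, using the
# experimental "parallel" strategy with a concurrency of 2. Verify before use.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: upgrade-config
value: '{"imagePreloadOption": {"strategy": {"type": "parallel", "concurrency": 2}}}'
```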
@@ … @@
-**Definition**: Setting that allows you to force rescheduling of a VM when the node that it is running on becomes unavailable.
+**Definition**: Setting that allows you to force rescheduling of a VM when the node that it is running on becomes unavailable.
 
 When the state of the node changes to `Not Ready`, the VM is force deleted and rescheduled to an available node after the configured number of seconds.
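The hunk above does not show the setting's name; assuming it documents the `vm-force-reset-policy` setting, a sketch of its value could look like the following, with the period in seconds chosen arbitrarily for illustration.

```yaml
# Assumption: this is the vm-force-reset-policy setting. When a node stays
# Not Ready for "period" seconds, the VM is force deleted and rescheduled.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: vm-force-reset-policy
value: '{"enable": true, "period": 300}'
```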
versioned_docs/version-v1.5/rancher/resource-quota.md (+8 -3)
@@ -23,6 +23,7 @@ In Harvester, ResourceQuota can define usage limits for the following resources:
 - **Storage:** Limits the usage of storage resources.
 
 ## Set ResourceQuota via Rancher
+
 In the Rancher UI, administrators can configure resource quotas for namespaces through the following steps:
 
 1. Click the hamburger menu and choose the **Virtualization Management** tab.
@@ -31,9 +32,9 @@ In the Rancher UI, administrators can configure resource quotas for namespaces t
 
 
 :::note
-The "VM Default Resource Limit" is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource "reservation" and "limit" values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
+The `VM Default Resource Limit` is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource `reservation` and `limit` values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
 
-These configuration will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
+These configurations will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
 :::
 
 You can configure the **Namespace** limits as follows:
@@ -49,11 +50,14 @@ Attempts to provision VMs for guest clusters are blocked when the resource quota
 
 :::important
 
-Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each has `8 Gi` memory.
+- Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs, each with `8 Gi` of memory. The [additional-guest-memory-overhead-ratio setting](../advanced/settings.md#additional-guest-memory-overhead-ratio) includes a table showing how the final memory of a VM is calculated.
+
+- When you plan to change the Harvester setting [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) to a larger value, remember to review the `ResourceQuota` values and update them accordingly. You need to tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
 
 ## Overhead Memory of Virtual Machine
+
 Upon creating a virtual machine (VM), the VM controller seamlessly incorporates overhead resources into the VM's configuration. These additional resources intend to guarantee the consistent and uninterrupted functioning of the VM. It's important to note that configuring memory limits requires a higher memory reservation due to the inclusion of these overhead resources.
 
 For example, consider the creation of a new VM with the following configuration:
@@ -89,6 +93,7 @@ The `Overhead Memory` varies between different Harvester releases (with differen
 :::
 
 ## Automatic adjustment of ResourceQuota during migration
+
 When the allocated resource quota controlled by the `ResourceQuota` object reaches its limit, migrating a VM becomes unfeasible. The migration process automatically creates a new pod mirroring the resource requirements of the source VM. If these pod creation prerequisites surpass the defined quota, the migration operation cannot proceed.
versioned_docs/version-v1.6/advanced/settings.md (+3 -1)
@@ -458,7 +458,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. You need to tune those two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.