Commit 11e2192

Highlight the influence of overhead-ratio on resourcequota (#903)
* Highlight the influence of overhead-ratio on resourcequota
* Update per review comments

Signed-off-by: Jian Wang <[email protected]>
1 parent b2725b6 commit 11e2192

File tree

6 files changed: +47 -28 lines changed


docs/advanced/settings.md

Lines changed: 3 additions & 1 deletion
@@ -458,7 +458,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. Tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
:::
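Editorial aside: the interaction between the ratio and an existing `ResourceQuota` can be sketched numerically. The base overhead and VM sizes below are illustrative assumptions, not Harvester's exact overhead formula (see the overhead table in this setting's documentation for real figures):

```python
# Sketch: how much namespace memory quota the same VMs need after raising
# additional-guest-memory-overhead-ratio. The base overhead per VM is a
# placeholder assumption for illustration only.

def required_quota_gi(vm_count: int, guest_memory_gi: float,
                      base_overhead_gi: float, ratio: float) -> float:
    """Total memory (Gi) the ResourceQuota must allow so that vm_count VMs,
    each carrying guest memory plus ratio-scaled overhead, can run."""
    per_vm = guest_memory_gi + base_overhead_gi * ratio
    return vm_count * per_vm

# Three 8 Gi VMs with an assumed 0.25 Gi base overhead:
before = required_quota_gi(3, 8, 0.25, 1.5)  # ratio "1.5" -> 25.125 Gi
after = required_quota_gi(3, 8, 0.25, 3.0)   # ratio "3.0" -> 26.25 Gi
print(before, after)
```

If the namespace quota was sized for the old ratio, the same three VMs no longer fit after the increase, which is why the quota must be reviewed together with the ratio.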

docs/rancher/resource-quota.md

Lines changed: 8 additions & 4 deletions
@@ -23,6 +23,7 @@ In Harvester, ResourceQuota can define usage limits for the following resources:
 - **Storage:** Limits the usage of storage resources.
 
 ## Set ResourceQuota via Rancher
+
 In the Rancher UI, administrators can configure resource quotas for namespaces through the following steps:
 
 1. Click the hamburger menu and choose the **Virtualization Management** tab.
@@ -31,12 +32,11 @@ In the Rancher UI, administrators can configure resource quotas for namespaces t
 ![](/img/v1.4/rancher/create-project.png)
 
 :::note
-The "VM Default Resource Limit" is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource "reservation" and "limit" values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
+The `VM Default Resource Limit` is used to set the default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource `reservation` and `limit` values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
 
-These configuration will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
+These configurations will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
 :::
 
-
 You can configure the **Namespace** limits as follows:
 
 1. Find the newly created project, and select **Create Namespace**.
@@ -50,11 +50,14 @@ Attempts to provision VMs for guest clusters are blocked when the resource quota is insufficient.
 
 :::important
 
-Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each has `8 Gi` memory.
+- Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. Take this into account when setting **Memory Limit**. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each with `8 Gi` of memory. The [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) setting documentation includes a table showing how the final memory of a VM is calculated.
+
+- When you plan to change the Harvester setting [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) to a bigger value, remember to review the `ResourceQuota` values and update them accordingly. Tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
:::
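Editorial aside: the arithmetic behind the `24 Gi` example above can be sketched as a simple fit check. The per-VM overhead value is an illustrative assumption, not Harvester's exact figure:

```python
# Sketch: why a 24 Gi Memory Limit cannot run three 8 Gi VMs once per-VM
# overhead is counted. The 0.3 Gi overhead below is an assumption for
# illustration only.

def fits_quota(vm_count: int, guest_memory_gi: float,
               overhead_gi: float, quota_gi: float) -> bool:
    """True when vm_count VMs, each needing guest memory plus overhead,
    stay within the namespace ResourceQuota memory limit."""
    return vm_count * (guest_memory_gi + overhead_gi) <= quota_gi

print(fits_quota(3, 8, 0.0, 24))  # True: fits only if overhead were zero
print(fits_quota(3, 8, 0.3, 24))  # False: 3 x 8.3 Gi = 24.9 Gi > 24 Gi
```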
 
 ## Overhead Memory of Virtual Machine
+
 Upon creating a virtual machine (VM), the VM controller seamlessly incorporates overhead resources into the VM's configuration. These additional resources are intended to guarantee the consistent and uninterrupted functioning of the VM. Note that configuring memory limits requires a higher memory reservation due to the inclusion of these overhead resources.
 
 For example, consider the creation of a new VM with the following configuration:
@@ -90,6 +93,7 @@ The `Overhead Memory` varies between different Harvester releases (with differen
 :::
 
 ## Automatic adjustment of ResourceQuota during migration
+
 When the allocated resource quota controlled by the `ResourceQuota` object reaches its limit, migrating a VM becomes unfeasible. The migration process automatically creates a new pod mirroring the resource requirements of the source VM. If these pod creation prerequisites surpass the defined quota, the migration operation cannot proceed.
 
_Available as of v1.2.0_
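Editorial aside: the migration constraint described above amounts to a headroom check, since the target pod temporarily needs the same resources as the source VM. The quota and memory figures are illustrative assumptions:

```python
# Sketch: during live migration a target pod mirroring the source VM's
# resource requirements is created, so the namespace quota must briefly
# cover both. All numbers are illustrative assumptions.

def migration_headroom_ok(vm_memory_gi: float, used_gi: float,
                          quota_gi: float) -> bool:
    """True when the quota leaves room for the migration target pod."""
    return used_gi + vm_memory_gi <= quota_gi

# A namespace with a 24 Gi quota already using 20 Gi cannot migrate an
# 8 Gi VM (20 + 8 = 28 Gi > 24 Gi), but could at 14 Gi of usage:
print(migration_headroom_ok(8, 20, 24))  # False
print(migration_headroom_ok(8, 14, 24))  # True
```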

versioned_docs/version-v1.5/advanced/settings.md

Lines changed: 17 additions & 15 deletions
@@ -78,7 +78,7 @@ For more information, see the **Certificate Rotation** section of the [Rancher](
 
 ### `backup-target`
 
-**Definition**: Custom backup target used to store VM backups. 
+**Definition**: Custom backup target used to store VM backups.
 
 For more information, see the [Longhorn documentation](https://longhorn.io/docs/1.6.0/snapshots-and-backups/backup-and-restore/set-backup-target/#set-up-aws-s3-backupstore).
 
@@ -122,7 +122,7 @@ https://172.16.0.1/v3/import/w6tp7dgwjj549l88pr7xmxb4x6m54v5kcplvhbp9vv2wzqrrjhr
 
 ### `containerd-registry`
 
-**Definition**: Configuration of a private registry created for the Harvester cluster. 
+**Definition**: Configuration of a private registry created for the Harvester cluster.
 
 The value is stored in the `registries.yaml` file of each node (path: `/etc/rancher/rke2/registries.yaml`). For more information, see [Containerd Registry Configuration](https://docs.rke2.io/install/private_registry) in the RKE2 documentation.
 
@@ -205,7 +205,7 @@ Changing this setting might cause single-node clusters to temporarily become una
 - Proxy URL for HTTPS requests: `"httpsProxy": "https://<username>:<pswd>@<ip>:<port>"`
 - Comma-separated list of hostnames and/or CIDRs: `"noProxy": "<hostname | CIDR>"`
 
-You must specify key information in the `noProxy` field if you configured the following options or settings: 
+You must specify key information in the `noProxy` field if you configured the following options or settings:
 
 | Configured option/setting | Required value in `noProxy` | Reason |
 | --- | --- | --- |
@@ -252,7 +252,7 @@ debug
 
 **Definition**: Setting that enables and disables the Longhorn V2 Data Engine.
 
-When set to `true`, Harvester automatically loads the kernel modules required by the Longhorn V2 Data Engine, and attempts to allocate 1024 × 2 MiB-sized huge pages (for example, 2 GiB of RAM) on all nodes. 
+When set to `true`, Harvester automatically loads the kernel modules required by the Longhorn V2 Data Engine, and attempts to allocate 1024 × 2 MiB-sized huge pages (for example, 2 GiB of RAM) on all nodes.
 
 Changing this setting automatically restarts RKE2 on all nodes but does not affect running virtual machine workloads.
 
@@ -261,7 +261,7 @@ Changing this setting automatically restarts RKE2 on all nodes but does not affe
 If you encounter error messages that include the phrase "not enough hugepages-2Mi capacity", allow some time for the error to be resolved. If the error persists, reboot the affected nodes.
 
 To disable the Longhorn V2 Data Engine on specific nodes (for example, nodes with less processing and memory resources), go to the **Hosts** screen and add the following label to the target nodes:
-
+
 - label: `node.longhorn.io/disable-v2-data-engine`
 - value: `true`
 
@@ -306,7 +306,7 @@ Changes to the server address list are applied to all nodes.
 
 **Definition**: Percentage of physical compute, memory, and storage resources that can be allocated for VM use.
 
-Overcommitting is used to optimize physical resource allocation, particularly when VMs are not expected to fully consume the allocated resources most of the time. Setting values greater than 100% allows scheduling of multiple VMs even when physical resources are notionally fully allocated. 
+Overcommitting is used to optimize physical resource allocation, particularly when VMs are not expected to fully consume the allocated resources most of the time. Setting values greater than 100% allows scheduling of multiple VMs even when physical resources are notionally fully allocated.
 
 **Default values**: `{ "cpu":1600, "memory":150, "storage":200 }`
 
@@ -444,7 +444,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. Tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
 
@@ -515,7 +517,7 @@ If you misconfigure this setting and are unable to access the Harvester UI and A
 
 **Supported options and values**:
 
-- `protocols`: Enabled protocols. 
+- `protocols`: Enabled protocols.
 - `ciphers`: Enabled ciphers.
 
 For more information about the supported options, see [`ssl-protocols`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-protocols) and [`ssl-ciphers`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers) in the Ingress-Nginx Controller documentation.
@@ -686,7 +688,7 @@ When the cluster is upgraded in the future, the contents of the `value` field ma
 
 **Versions**: v1.2.0 and later
 
-**Definition**: Additional namespaces that you can use when [generating a support bundle](../troubleshooting/harvester.md#generate-a-support-bundle). 
+**Definition**: Additional namespaces that you can use when [generating a support bundle](../troubleshooting/harvester.md#generate-a-support-bundle).
 
 By default, the support bundle only collects resources from the following predefined namespaces:
 
@@ -729,7 +731,7 @@ You can specify a value greater than or equal to 0. When the value is 0, Harvest
 
 **Versions**: v1.3.1 and later
 
-**Definition**: Number of minutes Harvester allows for collection of logs and configurations (Harvester) on the nodes for the support bundle. 
+**Definition**: Number of minutes Harvester allows for collection of logs and configurations (Harvester) on the nodes for the support bundle.
 
 If the collection process is not completed within the allotted time, Harvester still allows you to download the support bundle (without the uncollected data). You can specify a value greater than or equal to 0. When the value is 0, Harvester uses the default value.
 
@@ -770,7 +772,7 @@ https://your.upgrade.checker-url/v99/checkupgrade
 **Supported options and fields**:
 
 - `imagePreloadOption`: Options for the image preloading phase.
-
+
   The full ISO contains the core operating system components and all required container images. Harvester can preload these container images to each node during installation and upgrades. When workloads are scheduled to management and worker nodes, the container images are ready to use.
 
   - `strategy`: Image preload strategy.
@@ -786,10 +788,10 @@ https://your.upgrade.checker-url/v99/checkupgrade
   If you decide to use `skip`, ensure that the following requirements are met:
 
   - You have a private container registry that contains all required images.
-  - Your cluster has high-speed internet access and is able to pull all images from Docker Hub when necessary. 
-
+  - Your cluster has high-speed internet access and is able to pull all images from Docker Hub when necessary.
+
   Note any potential internet service interruptions and how close you are to reaching your [Docker Hub rate limit](https://www.docker.com/increase-rate-limits/). Failure to download any of the required images may cause the upgrade to fail and may leave the cluster in a middle state.
-
+
   :::
 
 - `parallel` (**experimental**): Nodes preload images in batches. You can adjust this using the `concurrency` option.
@@ -839,7 +841,7 @@ https://your.upgrade.checker-url/v99/checkupgrade
 
 ### `vm-force-reset-policy`
 
-**Definition**: Setting that allows you to force rescheduling of a VM when the node that it is running on becomes unavailable. 
+**Definition**: Setting that allows you to force rescheduling of a VM when the node that it is running on becomes unavailable.
 
 When the state of the node changes to `Not Ready`, the VM is force deleted and rescheduled to an available node after the configured number of seconds.
 

versioned_docs/version-v1.5/rancher/resource-quota.md

Lines changed: 8 additions & 3 deletions
@@ -23,6 +23,7 @@ In Harvester, ResourceQuota can define usage limits for the following resources:
 - **Storage:** Limits the usage of storage resources.
 
 ## Set ResourceQuota via Rancher
+
 In the Rancher UI, administrators can configure resource quotas for namespaces through the following steps:
 
 1. Click the hamburger menu and choose the **Virtualization Management** tab.
@@ -31,9 +32,9 @@ In the Rancher UI, administrators can configure resource quotas for namespaces t
 ![](/img/v1.4/rancher/create-project.png)
 
 :::note
-The "VM Default Resource Limit" is used to set default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource "reservation" and "limit" values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
+The `VM Default Resource Limit` is used to set the default request/limit on compute resources for pods running within the namespace, using the Kubernetes [`LimitRange` API](https://kubernetes.io/docs/concepts/policy/limit-range/). The resource `reservation` and `limit` values correspond to the `defaultRequest` and `default` limits of the namespace's `LimitRange` configuration. These settings are applied to pod workloads only.
 
-These configuration will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
+These configurations will be removed in the future. See issue https://github.com/harvester/harvester/issues/5652.
 :::
 
 You can configure the **Namespace** limits as follows:
@@ -49,11 +50,14 @@ Attempts to provision VMs for guest clusters are blocked when the resource quota is insufficient.
 
 :::important
 
-Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. When setting **Memory Limit**, this should be taken into account. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each has `8 Gi` memory.
+- Due to the [Overhead Memory of Virtual Machine](#overhead-memory-of-virtual-machine), each VM needs some additional memory to work. Take this into account when setting **Memory Limit**. For example, when the project **Memory Limit** is `24 Gi`, it is not possible to run 3 VMs each with `8 Gi` of memory. The [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) setting documentation includes a table showing how the final memory of a VM is calculated.
+
+- When you plan to change the Harvester setting [additional-guest-memory-overhead-ratio](../advanced/settings.md#additional-guest-memory-overhead-ratio) to a bigger value, remember to review the `ResourceQuota` values and update them accordingly. Tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
 
 ## Overhead Memory of Virtual Machine
+
 Upon creating a virtual machine (VM), the VM controller seamlessly incorporates overhead resources into the VM's configuration. These additional resources are intended to guarantee the consistent and uninterrupted functioning of the VM. Note that configuring memory limits requires a higher memory reservation due to the inclusion of these overhead resources.
 
 For example, consider the creation of a new VM with the following configuration:
@@ -89,6 +93,7 @@ The `Overhead Memory` varies between different Harvester releases (with differen
 :::
 
 ## Automatic adjustment of ResourceQuota during migration
+
 When the allocated resource quota controlled by the `ResourceQuota` object reaches its limit, migrating a VM becomes unfeasible. The migration process automatically creates a new pod mirroring the resource requirements of the source VM. If these pod creation prerequisites surpass the defined quota, the migration operation cannot proceed.
 
 _Available as of v1.2.0_

versioned_docs/version-v1.6/advanced/settings.md

Lines changed: 3 additions & 1 deletion
@@ -458,7 +458,9 @@ Changing the `additional-guest-memory-overhead-ratio` setting affects the VMs pe
 
 - When a VM has a user configured `Reserved Memory`, this is always kept.
 
-- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs which have the `100Mi default Reserved Memory` will keep it, the existing VMs which do not have `100Mi default Reserved Memory` will not get it automatically.
+- When the value changes between `"0"` and the range `["", "1.0" .. "10.0"]`, the existing VMs that have the `100Mi default Reserved Memory` will keep it, and the existing VMs that do not have `100Mi default Reserved Memory` will not get it automatically.
+
+- When [ResourceQuota](../rancher/resource-quota.md#set-resourcequota-via-rancher) is configured on namespaces, the new ratio is used when VMs are migrated or started. Tune these two parameters to ensure the `ResourceQuota` can accommodate the original number of VMs, which will have the new amount of overhead memory.
 
 :::
