As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.
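A sketch of such a verification. The version shown is a placeholder, not a real release; substitute the chart version you intend to install:

```shell
# Verify the chart's Cosign signature against the GitHub Actions release workflow.
# "1.6.0" below is a placeholder version.
cosign verify public.ecr.aws/karpenter/karpenter:1.6.0 \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-github-workflow-repository=aws/karpenter-provider-aws \
  --certificate-github-workflow-name=Release \
  --certificate-github-workflow-ref=refs/tags/v1.6.0 \
  --annotations version=1.6.0
```

If verification succeeds, Cosign prints the matched certificate identity and signature payload; a non-zero exit code means the signature did not match.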
| serviceMonitor.endpointConfig | object |`{}`| Configuration on `http-metrics` endpoint for the ServiceMonitor. Not to be used to add additional endpoints. See the Prometheus operator documentation for configurable fields https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#endpoint|
| serviceMonitor.metricRelabelings | list |`[]`| Metric relabelings for the `http-metrics` endpoint on the ServiceMonitor. For more details on metric relabelings, see: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs|
| serviceMonitor.relabelings | list |`[]`| Relabelings for the `http-metrics` endpoint on the ServiceMonitor. For more details on relabelings, see: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config|
| serviceMonitor.sampleLimit | string |`nil`| Specifies the sampleLimit for Prometheus scrapes: a per-scrape limit on the number of scraped samples that will be accepted. If more than this number of samples are present after metric relabeling, the entire scrape is treated as failed. 0 means no limit. |
| settings | object |`{"batchIdleDuration":"1s","batchMaxDuration":"10s","clusterCABundle":"","clusterEndpoint":"","clusterName":"","disableClusterStateObservability":false,"disableDryRun":false,"eksControlPlane":false,"featureGates":{"nodeOverlay":false,"nodeRepair":false,"reservedCapacity":true,"spotToSpotConsolidation":false,"staticCapacity":false},"ignoreDRARequests":true,"interruptionQueue":"","isolatedVPC":false,"minValuesPolicy":"Strict","preferencePolicy":"Respect","reservedENIs":"0","vmMemoryOverheadPercent":0.075}`| Global Settings to configure Karpenter |
| settings.batchIdleDuration | string |`"1s"`| The maximum amount of time with no new pending pods that, if exceeded, ends the current batching window. If pods arrive faster than this time, the batching window will be extended up to the maxDuration. If they arrive slower, the pods will be batched separately. |
| settings.batchMaxDuration | string |`"10s"`| The maximum length of a batch window. The longer this is, the more pods we can consider for provisioning at one time which usually results in fewer but larger nodes. |
| settings.disableDryRun | bool |`false`| Disable dry run validation for EC2NodeClasses. |
| settings.eksControlPlane | bool |`false`| Marking this true means that your cluster is running with an EKS control plane and Karpenter should attempt to discover cluster details from the DescribeCluster API. |
| settings.featureGates | object |`{"nodeOverlay":false,"nodeRepair":false,"reservedCapacity":true,"spotToSpotConsolidation":false,"staticCapacity":false}`| Feature Gate configuration values. Feature Gates will follow the same graduation process and requirements as feature gates in Kubernetes. More information here https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features.|
| settings.featureGates.nodeOverlay | bool |`false`| nodeOverlay is ALPHA and is disabled by default. Setting this will allow the use of node overlay to impact scheduling decisions |
| settings.featureGates.nodeRepair | bool |`false`| nodeRepair is ALPHA and is disabled by default. Setting this to true will enable node repair. |
| settings.featureGates.reservedCapacity | bool |`true`| reservedCapacity is BETA and is enabled by default. Setting this will enable native on-demand capacity reservation support. |
| settings.featureGates.spotToSpotConsolidation | bool |`false`| spotToSpotConsolidation is ALPHA and is disabled by default. Setting this to true will enable spot replacement consolidation for both single and multi-node consolidation. |
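The values above are set through the chart's values file. A minimal sketch, with illustrative limits and gates; the `serviceMonitor.enabled` toggle and all concrete values here are assumptions, not recommendations:

```yaml
# values.yaml sketch -- illustrative values only
serviceMonitor:
  enabled: true            # assumption: the chart exposes this toggle
  sampleLimit: 5000        # fail scrapes returning more than 5000 samples
  metricRelabelings:
    - sourceLabels: [__name__]
      regex: 'karpenter_.*'
      action: keep         # keep only Karpenter's own metrics
settings:
  clusterName: my-cluster  # hypothetical cluster name
  batchIdleDuration: 2s
  batchMaxDuration: 30s    # wider window: fewer, larger nodes
  featureGates:
    spotToSpotConsolidation: true
```

Pass the file with `-f values.yaml` (or individual `--set` flags) when running `helm install`/`helm upgrade`.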
`website/content/en/docs/concepts/disruption.md`

Pod disruption budgets may be used to rate-limit application disruption.
### Expiration
Expiration is a forceful disruption method that begins draining a node immediately once its lifetime exceeds the duration set on the owning NodeClaim's `spec.expireAfter` field.
Changes to `spec.template.spec.expireAfter` on the owning NodePool will not update the field for existing NodeClaims - it will induce NodeClaim drift and the replacements will have the updated value.
Expiration can be used, in conjunction with [`terminationGracePeriod`](#terminationgraceperiod), to enforce a maximum Node lifetime.
By default, `expireAfter` is set to `720h` (30 days).
{{% alert title="Note" color="primary" %}}
The `expireAfter` field defines the **maximum** node lifetime (upper bound), not a guaranteed minimum.
Nodes can be disrupted earlier than the `expireAfter` duration by other disruption methods such as [Drift]({{<ref "#drift">}}), [Consolidation]({{<ref "#consolidation">}}), or [Emptiness]({{<ref "#consolidation">}}) if their [disruption budgets]({{<ref "#nodepool-disruption-budgets">}}) allow.
For example, a NodePool with `expireAfter: 720h` (30 days) can still have nodes terminated earlier if the node becomes drifted due to an AMI update and the disruption budget permits drift-based disruptions.
To enforce a true maximum node lifetime that cannot be shortened by other disruption methods, use `expireAfter` in combination with carefully configured disruption budgets that limit or prevent other disruption reasons.
{{% /alert %}}
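As a sketch of that combination, the NodePool below zeroes out the budget for the graceful disruption reasons so that expiration becomes the dominant cause of node replacement. The name and durations are illustrative assumptions:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default          # hypothetical NodePool name
spec:
  template:
    spec:
      expireAfter: 720h            # nodes live at most 30 days
      terminationGracePeriod: 24h  # force-drain within a day of expiry
  disruption:
    budgets:
      # Block voluntary disruption for reasons other than expiration.
      - nodes: "0"
        reasons: ["Drifted", "Underutilized", "Empty"]
```

With no other budgets defined, expiration itself remains unrestricted, so nodes still rotate out at the 30-day mark.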
{{% alert title="Warning" color="warning" %}}
Misconfigured PDBs and pods with the `karpenter.sh/do-not-disrupt` annotation may block draining indefinitely.
For this reason, it is not recommended to set `expireAfter` without also setting `terminationGracePeriod` **if** your cluster has pods with the `karpenter.sh/do-not-disrupt` annotation.
The `Custom` AMIFamily ships without any default userData to allow you to configure custom bootstrapping for control planes or images that don't support the default methods from the other families. For this AMIFamily, kubelet must add the taint `karpenter.sh/unregistered:NoExecute` via the `--register-with-taints` flag ([flags](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)) or the KubeletConfiguration spec ([options](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) and [docs](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/)). Karpenter will fail to register nodes that do not have this taint.
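A minimal sketch of how the taint might be passed through in userData. The AMI ID, cluster name, and bootstrap script path are illustrative assumptions; the `--register-with-taints` value is the required part:

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: custom
spec:
  amiFamily: Custom
  amiSelectorTerms:
    - id: ami-0123456789abcdef0   # hypothetical custom AMI
  userData: |
    #!/bin/bash
    # Hypothetical bootstrap invocation; adjust for your image.
    # The unregistered taint below is what Karpenter requires.
    /etc/eks/bootstrap.sh my-cluster \
      --kubelet-extra-args '--register-with-taints=karpenter.sh/unregistered:NoExecute'
```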
An `alias` term can be used to select EKS-optimized AMIs. An `alias` is formatted as `family@version`.
* `bottlerocket`
* `windows2019`
* `windows2022`
* `windows2025`
The version string can be set to `latest`, or pinned to a specific AMI using the format of that AMI's GitHub release tags.
For example, AL2 and AL2023 use dates for their release, so they can be pinned as follows:
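A sketch of a date-pinned alias; the tag shown is a hypothetical release date, not a real release, so substitute a tag from the AMI's GitHub releases:

```yaml
amiSelectorTerms:
  - alias: al2023@v20240807   # hypothetical date-based release tag
```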
### Windows2019/Windows2022/Windows2025
```yaml
spec:
  blockDeviceMappings:
    # ...
```
This allows the container to take ownership of devices allocated to the pod via the security context.
This setting helps you enable Neuron workloads on Bottlerocket instances. See [Accelerators/GPU Resources]({{< ref "./scheduling#acceleratorsgpu-resources" >}}) for more details.
### Windows2019/Windows2022/Windows2025
* Your UserData must be specified as PowerShell commands.
* The UserData specified will be prepended to a Karpenter managed section that will bootstrap the kubelet.
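A sketch of what that might look like on an EC2NodeClass; the PowerShell command is an illustrative assumption and simply runs before the Karpenter-managed kubelet bootstrap section:

```yaml
spec:
  amiSelectorTerms:
    - alias: windows2022@latest
  userData: |
    # Hypothetical pre-bootstrap step; Karpenter appends its own
    # bootstrap section after this.
    New-Item -ItemType Directory -Path 'C:\karpenter-example' -Force
```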