Merged
2 changes: 1 addition & 1 deletion go.mod

````diff
@@ -1,6 +1,6 @@
 module github.com/aws/karpenter-provider-aws
 
-go 1.26.1
+go 1.26.2
 
 // TODO: migrate tablewriter to v1.0.8
 // https://github.com/olekukonko/tablewriter/blob/c64d84b3ecc64a18cfc8ba10cdd8c52cc13a7d23/MIGRATION.md?plain=1#L661
````
5 changes: 2 additions & 3 deletions website/content/en/docs/concepts/nodepools.md

````diff
@@ -149,8 +149,7 @@ spec:
   limits:
     cpu: "1000"
     memory: 1000Gi
-    # For static NodePools, limits.nodes constrains maximum node count during scaling/drift
-    # Note : Supported only for static NodePools
+    # limits.nodes constrains maximum node count during scaling/drift
     nodes: 10
 
   # Priority given to the NodePool when the scheduler considers which NodePool
@@ -418,7 +417,7 @@ The NodePool spec includes a limits section (`spec.limits`), which constrains th
 
 If the `NodePool.spec.limits` section is unspecified, it means that there is no default limitation on resource allocation. In this case, the maximum resource consumption is governed by the quotas set by your cloud provider. If a limit has been exceeded, nodes provisioning is prevented until some nodes have been terminated.
 
-**For Static NodePools:** Only `limits.nodes` is supported. This field constrains the maximum number of nodes during scaling operations or drift replacement. Note that `limits.nodes` is support only on static NodePools.
+The `limits.nodes` field constrains the maximum number of nodes during scaling operations or drift replacement.
 
 ```yaml
 apiVersion: karpenter.sh/v1
````
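For reference, the corrected comment describes a manifest like the following. This is a minimal sketch assembled from the fields shown in this diff, not taken verbatim from the docs; the `metadata.name` and `nodeClassRef` values are placeholders.

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default          # placeholder name
spec:
  template:
    spec:
      nodeClassRef:      # placeholder reference to the provider's node class
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "1000"
    memory: 1000Gi
    # limits.nodes constrains maximum node count during scaling/drift
    nodes: 10
```

Once the cluster's nodes for this NodePool reach the `limits.nodes` count, further provisioning is blocked until some of those nodes are terminated, mirroring how the `cpu` and `memory` limits behave.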
5 changes: 2 additions & 3 deletions website/content/en/preview/concepts/nodepools.md

````diff
@@ -149,8 +149,7 @@ spec:
   limits:
     cpu: "1000"
     memory: 1000Gi
-    # For static NodePools, limits.nodes constrains maximum node count during scaling/drift
-    # Note : Supported only for static NodePools
+    # limits.nodes constrains maximum node count during scaling/drift
    nodes: 10
 
   # Priority given to the NodePool when the scheduler considers which NodePool
@@ -418,7 +417,7 @@ The NodePool spec includes a limits section (`spec.limits`), which constrains th
 
 If the `NodePool.spec.limits` section is unspecified, it means that there is no default limitation on resource allocation. In this case, the maximum resource consumption is governed by the quotas set by your cloud provider. If a limit has been exceeded, nodes provisioning is prevented until some nodes have been terminated.
 
-**For Static NodePools:** Only `limits.nodes` is supported. This field constrains the maximum number of nodes during scaling operations or drift replacement. Note that `limits.nodes` is support only on static NodePools.
+The `limits.nodes` field constrains the maximum number of nodes during scaling operations or drift replacement.
 
 ```yaml
 apiVersion: karpenter.sh/v1
````
5 changes: 2 additions & 3 deletions website/content/en/v1.11/concepts/nodepools.md

````diff
@@ -149,8 +149,7 @@ spec:
   limits:
     cpu: "1000"
     memory: 1000Gi
-    # For static NodePools, limits.nodes constrains maximum node count during scaling/drift
-    # Note : Supported only for static NodePools
+    # limits.nodes constrains maximum node count during scaling/drift
    nodes: 10
 
   # Priority given to the NodePool when the scheduler considers which NodePool
@@ -418,7 +417,7 @@ The NodePool spec includes a limits section (`spec.limits`), which constrains th
 
 If the `NodePool.spec.limits` section is unspecified, it means that there is no default limitation on resource allocation. In this case, the maximum resource consumption is governed by the quotas set by your cloud provider. If a limit has been exceeded, nodes provisioning is prevented until some nodes have been terminated.
 
-**For Static NodePools:** Only `limits.nodes` is supported. This field constrains the maximum number of nodes during scaling operations or drift replacement. Note that `limits.nodes` is support only on static NodePools.
+The `limits.nodes` field constrains the maximum number of nodes during scaling operations or drift replacement.
 
 ```yaml
 apiVersion: karpenter.sh/v1
````