Commit 5686385

Merge pull request #45675 from Princesso/merged-main-dev-1.30
Merge main branch into dev-1.30
2 parents: 004553e + 0568c8a

File tree

35 files changed, +458 -282 lines changed

content/en/blog/_posts/2024-03-19-go-workspaces.md

Lines changed: 0 additions & 210 deletions
This file was deleted.

content/en/docs/concepts/architecture/nodes.md

Lines changed: 2 additions & 0 deletions
@@ -607,6 +607,8 @@ Learn more about the following:
 * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
 * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
   section of the architecture design document.
+* [Cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to
+  manage the number and size of nodes in your cluster.
 * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
 * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/).
 * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/).

content/en/docs/concepts/cluster-administration/_index.md

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ Before choosing a guide, here are some considerations:
 ## Managing a cluster
 
 * Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
+* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/).
 
 * Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.

content/en/docs/concepts/cluster-administration/addons.md

Lines changed: 5 additions & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Installing Addons
 content_type: concept
-weight: 120
+weight: 150
 ---
 
 <!-- overview -->
@@ -109,6 +109,10 @@ installation instructions. The list does not try to be exhaustive.
   [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) or
   [Node conditions](/docs/concepts/architecture/nodes/#condition).
 
+## Instrumentation
+
+* [kube-state-metrics](/docs/concepts/cluster-administration/kube-state-metrics)
+
 ## Legacy Add-ons
 
 There are several other add-ons documented in the deprecated
content/en/docs/concepts/cluster-administration/cluster-autoscaling.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
+---
+title: Cluster Autoscaling
+linkTitle: Cluster Autoscaling
+description: >-
+  Automatically manage the nodes in your cluster to adapt to demand.
+content_type: concept
+weight: 120
+---
+
+<!-- overview -->
+
+Kubernetes requires {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster to
+run {{< glossary_tooltip text="pods" term_id="pod" >}}. This means providing capacity for
+the workload Pods and for Kubernetes itself.
+
+You can adjust the amount of resources available in your cluster automatically:
+_node autoscaling_. You can either change the number of nodes, or change the capacity
+that nodes provide. The first approach is referred to as _horizontal scaling_, while the
+second is referred to as _vertical scaling_.
+
+Kubernetes can even provide multidimensional automatic scaling for nodes.
+
+<!-- body -->
+
+## Manual node management
+
+You can manually manage node-level capacity, where you configure a fixed amount of nodes;
+you can use this approach even if the provisioning (the process to set up, manage, and
+decommission) for these nodes is automated.
+
+This page is about taking the next step, and automating management of the amount of
+node capacity (CPU, memory, and other node resources) available in your cluster.
+
+## Automatic horizontal scaling {#autoscaling-horizontal}
+
+### Cluster Autoscaler
+
+You can use the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to manage the scale of your nodes automatically.
+The cluster autoscaler can integrate with a cloud provider, or with Kubernetes'
+[cluster API](https://github.com/kubernetes/autoscaler/blob/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler/cloudprovider/clusterapi/README.md),
+to achieve the actual node management that's needed.
+
+The cluster autoscaler adds nodes when there are unschedulable Pods, and
+removes nodes when those nodes are empty.
+
+#### Cloud provider integrations {#cluster-autoscaler-providers}
+
+The [README](https://github.com/kubernetes/autoscaler/tree/c6b754c359a8563050933a590f9a5dece823c836/cluster-autoscaler#readme)
+for the cluster autoscaler lists some of the cloud provider integrations
+that are available.
+
+## Cost-aware multidimensional scaling {#autoscaling-multi-dimension}
+
+### Karpenter {#autoscaler-karpenter}
+
+[Karpenter](https://karpenter.sh/) supports direct node management, via
+plugins that integrate with specific cloud providers, and can manage nodes
+for you whilst optimizing for overall cost.
+
+> Karpenter automatically launches just the right compute resources to
+> handle your cluster's applications. It is designed to let you take
+> full advantage of the cloud with fast and simple compute provisioning
+> for Kubernetes clusters.
+
+The Karpenter tool is designed to integrate with a cloud provider that
+provides API-driven server management, and where the price information for
+available servers is also available via a web API.
+
+For example, if you start some more Pods in your cluster, the Karpenter
+tool might buy a new node that is larger than one of the nodes you are
+already using, and then shut down an existing node once the new node
+is in service.
+
+#### Cloud provider integrations {#karpenter-providers}
+
+{{% thirdparty-content vendor="true" %}}
+
+There are integrations available between Karpenter's core and the following
+cloud providers:
+
+- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws)
+- [Azure](https://github.com/Azure/karpenter-provider-azure)
+
+
+## Related components
+
+### Descheduler
+
+The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you
+consolidate Pods onto a smaller number of nodes, to help with automatic scale down
+when the cluster has spare capacity.
+
+### Sizing a workload based on cluster size
+
+#### Cluster proportional autoscaler
+
+For workloads that need to be scaled based on the size of the cluster (for example
+`cluster-dns` or other system components), you can use the
+[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).<br />
+
+The Cluster Proportional Autoscaler watches the number of schedulable nodes
+and cores, and scales the number of replicas of the target workload accordingly.
+
+#### Cluster proportional vertical autoscaler
+
+If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
+the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
+This project is in **beta** and can be found on GitHub.
+
+While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
+adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
+in the cluster.
+
+
+## {{% heading "whatsnext" %}}
+
+- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/)
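
For readers reviewing the Cluster Autoscaler section added by this commit: the autoscaler is usually run as a Deployment inside the cluster and configured through command-line flags that declare which node groups it may scale and between which bounds. The excerpt below is an illustrative sketch and is not part of this commit; the node group name `example-nodegroup`, the `1:10` bounds, and the image tag are placeholder assumptions, and the flags you actually need depend on your cloud provider integration.

```yaml
# Illustrative excerpt of a cluster-autoscaler container spec (not part of this commit).
# "example-nodegroup", the 1:10 bounds, and the image tag are placeholder assumptions.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws                 # pick the integration for your environment
      - --nodes=1:10:example-nodegroup       # min:max:node-group-name the autoscaler may scale
      - --expander=least-waste               # strategy for choosing which node group to grow
      - --balance-similar-node-groups=true   # keep similarly sized node groups in balance
```

With a configuration along these lines, the autoscaler adds nodes to `example-nodegroup` when Pods are unschedulable and removes nodes again once they are empty, staying within the declared minimum and maximum.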
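
The Karpenter section describes direct node management through provider plugins. In practice you describe what Karpenter may provision with a `NodePool` resource. The manifest below is only a rough sketch and not part of this commit: the API version and field names vary between Karpenter releases, and the pool name, instance requirements, and CPU limit are placeholders, so check the Karpenter documentation for the schema that matches your installed version.

```yaml
# Rough sketch of a Karpenter NodePool (not part of this commit).
# Field names and the API version differ between Karpenter releases; treat this as illustrative.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch          # only provision amd64 nodes
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type  # allow cheaper capacity where available
          operator: In
          values: ["spot", "on-demand"]
      # a provider-specific nodeClassRef (for example, an AWS EC2NodeClass) is also
      # required in real deployments; it is omitted from this sketch
  limits:
    cpu: "100"                             # cap the total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenUnderutilized # let Karpenter repack Pods and remove underused nodes
```

The consolidation behaviour is what gives Karpenter its cost focus: it can replace several underused nodes with one better-fitting node, as the page's example describes.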
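
The Cluster Proportional Autoscaler described above is typically configured through a ConfigMap holding its scaling parameters. As a hedged sketch (not part of this commit), the `linear` mode shown below roughly computes replicas as max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)), clamped to the `min`/`max` range; the name `dns-autoscaler` and the numbers are placeholders chosen in the spirit of the project's README.

```yaml
# Illustrative ConfigMap for the Cluster Proportional Autoscaler's "linear" mode
# (not part of this commit); the name, namespace, and values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1,
      "max": 100,
      "preventSinglePointFailure": true
    }
```

For example, a 48-node cluster with 512 cores would get max(ceil(512/256), ceil(48/16)) = 3 replicas of the target workload under these placeholder values.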
