- Add this script to chkconfig so that it runs automatically after the instance starts.
```
chmod +x /etc/rc.d/init.d/init-k8s.sh
chkconfig --add /etc/rc.d/init.d/init-k8s.sh
chkconfig /etc/rc.d/init.d/init-k8s.sh on
```
- Copy `~/.kube/config` from the master node to `~/.kube/config` on this ECS instance to set up kubectl on it.
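
For example, assuming the master node is reachable at `192.168.0.10` (a placeholder IP) over SSH as `root`:

```
mkdir -p ~/.kube
scp root@192.168.0.10:~/.kube/config ~/.kube/config
```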
- Go to the Huawei Cloud `Image Management` service and click `Create Image`. Select type `System disk image`, select your ECS instance as `Source`, give it a name, and create the image.
- Remember this ECS instance ID since it will be used later.
### 7. Create AS Group
- Follow the Huawei Cloud instructions to create an AS Group.
- Create an AS Configuration and select the private image we just created.
- While creating the `AS Configuration`, add the following script under `Advanced Settings`.

- `as-endpoint`

Find the as endpoint for different regions [here](https://developer.huaweicloud.com/endpoint?AS).
For example, for region `cn-north-4`, the endpoint is
```
as.cn-north-4.myhuaweicloud.com
```
- `ecs-endpoint`
Find the ecs endpoint for different regions [here](https://developer.huaweicloud.com/endpoint?ECS).
For example, for region `cn-north-4`, the endpoint is
```
ecs.cn-north-4.myhuaweicloud.com
```
- `project-id`
Follow this link to find the project-id: [Obtaining a Project ID](https://support.huaweicloud.com/en-us/api-servicestage/servicestage_api_0023.html)
```
{Minimum number of nodes}:{Maximum number of nodes}:{Node pool name}
```
The above parameters should match the parameters of the AS Group you created.
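
For example, a value like the following (where `my-as-group` is a placeholder node pool name) would allow the autoscaler to scale that group between 1 and 5 nodes:

```
1:5:my-as-group
```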
More configuration options can be added to the cluster autoscaler, such as `scale-down-delay-after-add`, `scale-down-unneeded-time`, etc.
See available configuration options [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca).
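
As a sketch, such options are passed as extra command-line flags on the cluster autoscaler container in its deployment manifest; the fragment and flag values below are illustrative, not recommendations:

```
command:
  - ./cluster-autoscaler
  - --scale-down-delay-after-add=10m
  - --scale-down-unneeded-time=10m
```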
#### Deploy cluster autoscaler on the cluster
1. Log in to a machine which can manage the cluster with `kubectl`.
Make sure the machine has kubectl access to the cluster.
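
A quick sanity check (any read-only command against the cluster will do):

```
kubectl get nodes
```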
2. Create the Service Account:

Check that the cluster autoscaler pod is up and running:

```
kubectl get pods -n kube-system
```
To see whether it functions correctly, deploy a Service to the cluster, then increase and decrease the workload to the
Service. The cluster autoscaler should be able to autoscale the AS Group to accommodate the load.
A simple way to test it is:
- Create a Service listening for http requests, for example as shown below
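
A minimal example, using an `nginx` Deployment as a stand-in for a real workload:

```
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80
```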
- Create an HPA policy for pods to be autoscaled. Install [metrics server](https://github.com/kubernetes-sigs/metrics-server) first if the cluster does not already have it.
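
A minimal sketch of such a policy, reusing the `nginx` Deployment from above (the `--min`/`--max` values are placeholders):

```
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=10
```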
The above command creates an HPA policy on the deployment with target average cpu usage of 50%. The number of
pods will grow if average cpu usage is above 50%, and will shrink otherwise. The `min` and `max` parameters set
the minimum and maximum number of pods of this deployment.
- Generate load on the above Service
Example tools for generating load on an http service are:
Feel free to use other tools which have a similar function.
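
One possible load generator (assuming the `nginx` Service above; `load-generator` is a placeholder pod name):

```
kubectl run load-generator --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://nginx; done"
```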
- Wait for pods to be added: as load increases, more pods will be added by HPA
- Wait for nodes to be added: when there are insufficient resources for additional pods, new nodes will be added to the
cluster by the cluster autoscaler
- Stop the load
- Wait for pods to be removed: as load decreases, pods will be removed by HPA
- Wait for nodes to be removed: as pods are removed from nodes, several nodes will become underutilized or empty,
and will be removed by the cluster autoscaler