<!-- toc -->
- [Create a Kubernetes Cluster](#create-a-kubernetes-cluster)
- [Install release v0.25.7 and use Coscheduling](#install-release-v0257-and-use-coscheduling)
  - [As a second scheduler](#as-a-second-scheduler)
  - [As a single scheduler (replacing the vanilla default-scheduler)](#as-a-single-scheduler-replacing-the-vanilla-default-scheduler)
- [Test Coscheduling](#test-coscheduling)
If you do not have a cluster yet, create one by using one of the following provision tools:

* [kubeadm](https://kubernetes.io/docs/admin/kubeadm/)
* [minikube](https://minikube.sigs.k8s.io/)

## Install release v0.25.7 and use Coscheduling

Note: we provide two ways to install the scheduler-plugins artifacts: as a second scheduler
and as a single scheduler. Their pros and cons are as follows:
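When installed as a second scheduler, workloads opt in per Pod by setting `spec.schedulerName` to the second scheduler's name; everything else keeps using the vanilla default-scheduler. A minimal sketch (the scheduler name and image below are assumptions, not taken from this guide):

```yaml
# Sketch only: schedulerName must match whatever name the second
# scheduler was deployed with (assumed here, not from this guide).
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  schedulerName: scheduler-plugins-scheduler
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```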
>     - --kubeconfig=/etc/kubernetes/scheduler.conf
>     - --leader-elect=true
19,20c20
<     image: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.25.7
---
>     image: registry.k8s.io/kube-scheduler:v1.25.7
50,52d49
<     - mountPath: /etc/kubernetes/sched-cc.yaml
<       name: sched-cc
<       name: sched-cc
```
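The manifest changes above mount a scheduler configuration file at `/etc/kubernetes/sched-cc.yaml`. As a rough sketch of what such a file enables (the config API version and the exact plugin wiring vary by release; treat both as assumptions):

```yaml
# Hypothetical sched-cc.yaml: enables the Coscheduling plugin for the
# default profile. API version and plugin list are assumptions.
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: false
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
- schedulerName: default-scheduler
  plugins:
    multiPoint:
      enabled:
      - name: Coscheduling
```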
1. Verify that the kube-scheduler pod is running properly with the correct image: `registry.k8s.io/scheduler-plugins/kube-scheduler:v0.25.7`

   ```bash
   $ kubectl get pod -n kube-system | grep kube-scheduler
   kube-scheduler-kind-control-plane            1/1     Running   0          3m27s

   $ kubectl get pods -l component=kube-scheduler -n kube-system -o=jsonpath="{.items[0].spec.containers[0].image}{'\n'}"
   registry.k8s.io/scheduler-plugins/kube-scheduler:v0.25.7
   ```

> **⚠️ Troubleshooting:** If the kube-scheduler is not up, you may need to restart the kubelet service inside the kind control plane (`systemctl restart kubelet.service`).
Now, we're able to verify how the coscheduling plugin works.

```bash
$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
pause-646dbcfb64-4zvt6   0/1     Pending   0          9s
pause-646dbcfb64-8kpg4   0/1     Pending   0          9s
```
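The two replicas stay Pending because the deployment is associated with a PodGroup whose `minMember` (3) exceeds the current replica count (2), and Coscheduling refuses to bind any member until the whole quorum exists. The pairing this walkthrough presumes looks roughly like the following (the name and timeout are illustrative):

```yaml
# Illustrative only: name and timeout are assumptions. Pods join the
# group via the scheduling.x-k8s.io/pod-group label on their template.
apiVersion: scheduling.x-k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: pg1
spec:
  scheduleTimeoutSeconds: 10
  minMember: 3   # no member is bound until 3 pods of the group exist
```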
1. Now let's scale the deployment up to 3 replicas, so as to qualify for `minMember`
   (i.e., 3) of the associated PodGroup:

   ```bash
   $ kubectl scale deploy pause --replicas=3
   deployment.apps/pause scaled
   ```
   Wait a couple of seconds, and all Pods are expected to get into the Running state:

   ```bash
   $ kubectl get pod
   NAME                     READY   STATUS    RESTARTS   AGE
   pause-646dbcfb64-4zvt6   1/1     Running   0          42s
   pause-646dbcfb64-8kpg4   1/1     Running   0          42s
   pause-646dbcfb64-npzcf   1/1     Running   0          8s
   ```
1. You can also get the PodGroup's spec via: