
Commit 5cef2dd

Remove nodeSelector from k8s deployments (#2798)
nodeSelector can limit the scheduling capabilities of k8s, which leads to delays in assigning new workloads. Since we do not require any particular machine for execution, it can be removed.
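For context, a `nodeSelector` is a hard scheduling constraint: the pod can only run on nodes whose labels match every listed key/value, and it stays Pending until such a node exists. A minimal sketch of the kind of pod spec fragment this commit removes (label keys and values taken from the diff in this commit):

```yaml
# Sketch only: a hard nodeSelector pinning pods to a GKE "Performance"
# compute class on C4 machines. The scheduler may place the pod ONLY on
# nodes carrying both labels; if no such node is ready, the pod waits,
# which is the scheduling delay the commit message describes.
spec:
  nodeSelector:
    cloud.google.com/compute-class: "Performance"
    cloud.google.com/machine-family: c4
  containers:
  - name: backend
    image: gcr.io/GCP_PROJECT/nomulus
```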
1 parent 62b2585 commit 5cef2dd

File tree

2 files changed: 18 additions, 6 deletions

jetty/kubernetes/nomulus-backend.yaml

Lines changed: 6 additions & 3 deletions
@@ -14,9 +14,6 @@ spec:
         service: backend
     spec:
       serviceAccountName: nomulus
-      nodeSelector:
-        cloud.google.com/compute-class: "Performance"
-        cloud.google.com/machine-family: c4
       containers:
       - name: backend
         image: gcr.io/GCP_PROJECT/nomulus
@@ -25,9 +22,15 @@ spec:
           name: http
         resources:
           requests:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "500m"
             memory: "1Gi"
           limits:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "1000m"
             memory: "1Gi"
         args: [ENVIRONMENT]

jetty/kubernetes/nomulus-frontend.yaml

Lines changed: 12 additions & 3 deletions
@@ -14,9 +14,6 @@ spec:
         service: frontend
     spec:
       serviceAccountName: nomulus
-      nodeSelector:
-        cloud.google.com/compute-class: "Performance"
-        cloud.google.com/machine-family: c4
       containers:
       - name: frontend
         image: gcr.io/GCP_PROJECT/nomulus
@@ -25,9 +22,15 @@ spec:
           name: http
         resources:
           requests:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "1000m"
             memory: "1Gi"
           limits:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "1000m"
             memory: "2Gi"
         args: [ENVIRONMENT]
@@ -53,9 +56,15 @@ spec:
           name: epp
         resources:
           requests:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "1000m"
             memory: "512Mi"
           limits:
+            # explicit pod-slots 0 is required in order to downgrade node
+            # class from performance, which has implicit pod-slots 1
+            cloud.google.com/pod-slots: 0
             cpu: "1000m"
             memory: "512Mi"
         args: [--env, PROXY_ENV, --log, --local]
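As an aside, if the machine preference were ever still desirable, Kubernetes offers a softer mechanism than the removed hard `nodeSelector`: preferred node affinity. A hypothetical sketch (not part of this commit) using the standard `preferredDuringSchedulingIgnoredDuringExecution` field:

```yaml
# Hypothetical alternative, not used by this commit: a soft node
# affinity. The scheduler prefers nodes matching the expression but can
# still place the pod elsewhere, so workloads are never blocked waiting
# for a specific machine family the way a hard nodeSelector blocks them.
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: cloud.google.com/machine-family
            operator: In
            values: ["c4"]
```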
