Replies: 2 comments
-
Note: I ran free -mh too. On the pod it shows the total as equal to the whole node's memory, and you can see the job used 4.8 Gi even though I set the request to 1.0Gi.
-
fyi:
spec:
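A minimal sketch of where such a resources block lives in the gha-runner-scale-set chart's values.yaml, assuming the chart's default runner container name, image, and command (this is an illustration, not the original reply's exact snippet):

    template:
      spec:
        containers:
          - name: runner
            image: ghcr.io/actions/actions-runner:latest
            command: ["/home/runner/run.sh"]
            resources:
              requests:
                cpu: 1000m
                memory: 1Gi
              limits:
                cpu: 4000m
                memory: 8Gi

If the block is placed elsewhere in values.yaml, the chart will most likely ignore it; the limits only take effect when they are set on the container inside template.spec.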
-
Hello,
For a runner scale set setup (i.e., the controller plus runner scale sets, each configured with a values.yaml file), we are seeing an issue.
A user runs something like yarn, and it grabs all the CPUs of the node because it reads the CPU count from the OS; it then does not even use all of them.
If I run lscpu on a runner pod, I see:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
I tried to set the following in the runner scale set's values.yaml:
resources:
  requests:
    cpu: 1000m
    memory: 1.0Gi
  limits:
    cpu: 4000m
    memory: 8.0Gi
The pod still seems to see all the CPUs of the node, though, as shown by the lscpu output above.
How do we restrict CPU and memory for a pod so that one user's workflow job does not consume an entire node?
How do requests and limits work? If I limit it to one CPU and 1 GiB of memory as shown above, but the job needs 4 GiB of memory, can the pod still get the memory it needs?
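For reference, here is the same block annotated with the standard Kubernetes semantics for requests and limits (a general sketch, not specific to ARC):

    resources:
      requests:          # what the scheduler reserves on the node; the guaranteed minimum
        cpu: 1000m
        memory: 1Gi
      limits:            # hard caps enforced via cgroups at runtime
        cpu: 4000m       # CPU usage above the limit is throttled; the pod keeps running
        memory: 8Gi      # memory usage above the limit gets the container OOM-killed
    # Between the request and the limit the pod may use whatever the node has free,
    # so with a 1Gi request and an 8Gi limit a job that needs 4Gi can get it.
    # lscpu and free still report the node's totals because cgroup limits cap usage
    # rather than hide the hardware from the container.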