Description
I noticed that my BuildKit memory usage is well below the requested memory configuration on my Kubernetes cluster, but my `buildctl` Docker build calls timed out.
After checking the trace ID, I noticed that at a certain step of the Dockerfile running pip installs... the process got killed with exit code 137, even though the pod still had plenty of memory to spare.
Does anyone have any idea how this could have happened?
A little more context (a rough sketch of the deployment spec is below):
- My k8s nodes are using cgroup v2
- I'm running buildkitd with `--oci-worker-no-process-sandbox`
- Image tag: `v0.24.0-rootless`
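
For reference, the relevant part of the buildkitd container spec looks roughly like this. The security context follows the standard rootless example, and the memory request/limit values here are placeholders, not my exact numbers:

```yaml
# Sketch of the buildkitd container in my Deployment (values are illustrative)
containers:
  - name: buildkitd
    image: moby/buildkit:v0.24.0-rootless
    args:
      - --oci-worker-no-process-sandbox
    securityContext:
      # standard rootless setup
      seccompProfile:
        type: Unconfined
      runAsUser: 1000
      runAsGroup: 1000
    resources:
      requests:
        memory: 8Gi   # placeholder; actual request is well above the usage I observe
      limits:
        memory: 8Gi   # placeholder
```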