Pods getting Evicted during a GitHub action run #1842
Replies: 3 comments
-
The custom image is installing docker-compose and the AWS CLI.
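Nothing more of the image definition appears in the thread; a minimal sketch of a Dockerfile that does this, assuming the summerwind ARC base image and illustrative tool versions, might look like:

```dockerfile
# Sketch only: the base tag, tool versions, and user name are assumptions.
FROM summerwind/actions-runner:v2.296.2-ubuntu-20.04

USER root

# Standalone docker-compose binary (version illustrative)
RUN curl -fsSL \
      https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 \
      -o /usr/local/bin/docker-compose \
 && chmod +x /usr/local/bin/docker-compose

# AWS CLI v2 via the official installer
RUN apt-get update && apt-get install -y --no-install-recommends unzip \
 && curl -fsSL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o /tmp/awscliv2.zip \
 && unzip -q /tmp/awscliv2.zip -d /tmp \
 && /tmp/aws/install \
 && rm -rf /tmp/aws /tmp/awscliv2.zip /var/lib/apt/lists/*

USER runner
```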
-
bump
-
I am still facing this issue. While looking into it, I found that my pods are using more memory than they should:

```sh
[root@ip-10-70-111-97 ~]# kubectl top pods -A | grep runner
```

The config file limits memory to 6 GB (I actually raised it last week from the 4 GB listed above), yet the runner is using 7.5 GB. Could this be a memory leak? I also ran the following command inside the runner pod itself:

```sh
[root@ip-10-70-111-97 ~]# kubectl exec -it runner-deploy-tw8bh-mr828 -n actions-runner sh
```

which reported around 2 GB, compared to the 7.5 GB shown by `kubectl top pods`. When a pod hits around 10 GB, it is evicted. Also, no jobs were running when I captured that output.

Right now I am running a cronjob that deletes the pods every 8 hours so they get recreated, but I would appreciate any other fix we can include. Looking for any help I can get.
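For reference, a pod-recycling CronJob like the one described might look like this. This is not from the thread: the label selector and ServiceAccount name are assumptions, and the ServiceAccount needs RBAC permission to delete pods in the namespace. ARC recreates deleted runner pods automatically.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: runner-pod-recycler
  namespace: actions-runner
spec:
  schedule: "0 */8 * * *"                 # every 8 hours
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-recycler   # hypothetical; needs RBAC to delete pods
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.25
              command:
                - /bin/sh
                - -c
                # label selector is a placeholder for whatever labels the runner pods carry
                - kubectl delete pods -n actions-runner -l app=runner-deploy
```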
-
Randomly, I see some of the pods getting evicted (it happens to at least 2 pods per day). I have set resource limits and a priority class as well, but they still get evicted.
I updated the runners to v2.296.2-ubuntu-20.04 two days ago to see if it fixes the issue.
Currently the RunnerDeployment file looks like this:
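(The manifest itself did not survive extraction. Below is only a sketch of what a RunnerDeployment with the 6 Gi limit mentioned in this thread could look like, using the summerwind ARC API; the name, namespace, repository, image, and CPU figures are placeholders.)

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: runner-deploy
  namespace: actions-runner
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo                 # placeholder
      image: my-registry/custom-runner:latest    # the custom image discussed above
      resources:
        limits:
          cpu: "2"          # placeholder
          memory: 6Gi       # the limit described in the thread
        requests:
          cpu: "1"          # placeholder
          memory: 4Gi       # placeholder
```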
I removed the priority class after the update, since having it made no difference anyway. I also tried increasing the memory and CPU, which did not fix the issue either. I need help, as having to re-run our workflows is causing big problems in some of our GitHub CD pipelines.
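(Not part of the original post, but for readers debugging the same symptom: the eviction reason can be confirmed with standard kubectl commands. The pod name below is the one from the comment above; substitute your own node name.)

```sh
# List eviction events in the runner namespace
kubectl get events -n actions-runner --field-selector reason=Evicted

# An evicted pod's description shows the starved resource,
# e.g. "The node was low on resource: memory"
kubectl describe pod runner-deploy-tw8bh-mr828 -n actions-runner

# Check the node itself for MemoryPressure
kubectl describe node <node-name> | grep -A5 Conditions
```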