- After a successful deploy, another pair of `ckan` and `jobs` pods is created and hangs there forever.
- However, there are two working pods for both of them... and both of them are doing fine in the logs:
```
kubectl get pods -n aa
NAME                       READY   STATUS     RESTARTS   AGE
ckan-54854bd485-fx6j9      0/1     Init:0/2   0          18m
ckan-54f97b4cf4-fszlr      1/1     Running    0          5m38s
jobs-6d54db954d-tmgs8      0/1     Init:0/2   0          18m
jobs-db-7b7657ff88-gsj9n   1/1     Running    0          39m
nginx-6465d756b9-qsbdp     1/1     Running    0          39m
redis-c8c6ff95-zj68h       1/1     Running    0          39m
```
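
The `Init:0/2` pods are stuck waiting on volume attachment (see the events below), so it is worth confirming whether the data volume is a single-attach disk. A minimal sketch, assuming the claim behind the `ckan-data` volume is also named `ckan-data`:

```sh
# List claims in the namespace; the ACCESS MODES column shows RWO vs RWX.
# An Azure disk PVC is ReadWriteOnce and can only attach to one node at a time.
kubectl get pvc -n aa

# Inspect the claim directly (the name "ckan-data" is an assumption taken
# from the unmounted-volumes list in the pod events).
kubectl get pvc ckan-data -n aa -o jsonpath='{.spec.accessModes}'
```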
- Both hanging pods show nearly identical events in `kubectl describe` (note the events below are labeled by pod; the first block is the `jobs` pod, the second the `ckan` pod):

jobs pod:
```
Events:
  Type     Reason              Age                From                     Message
  ----     ------              ----               ----                     -------
  Normal   Scheduled           18m                default-scheduler        Successfully assigned aa/jobs-6d54db954d-tmgs8 to aks-default-44369806-vmss000001
  Warning  FailedAttachVolume  18m                attachdetach-controller  Multi-Attach error for volume "pvc-a90fe26a-517c-11ea-9a7d-fe823327918f" Volume is already used by pod(s) ckan-54f97b4cf4-pbgrd, ckan-54854bd485-p98sl
  Warning  FailedMount         50s (x8 over 16m)  kubelet, aks-default-44369806-vmss000001  Unable to mount volumes for pod "jobs-6d54db954d-tmgs8_aa(9f4d85bf-517f-11ea-9a7d-fe823327918f)": timeout expired waiting for volumes to attach or mount for pod "aa"/"jobs-6d54db954d-tmgs8". list of unmounted volumes=[ckan-data]. list of unattached volumes=[ckan-conf-secrets ckan-conf-templates ckan-data ckan-aa-operator-token-kgz4x]
```
ckan pod:
```
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    18m                default-scheduler  Successfully assigned aa/ckan-54854bd485-fx6j9 to aks-default-44369806-vmss000000
  Warning  FailedMount  48s (x8 over 16m)  kubelet, aks-default-44369806-vmss000000  Unable to mount volumes for pod "ckan-54854bd485-fx6j9_aa(9f343bf5-517f-11ea-9a7d-fe823327918f)": timeout expired waiting for volumes to attach or mount for pod "aa"/"ckan-54854bd485-fx6j9". list of unmounted volumes=[ckan-data]. list of unattached volumes=[ckan-conf-secrets ckan-conf-templates ckan-data ckan-aa-operator-token-kgz4x]
```
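
For context, a Multi-Attach error like the one above typically means the disk behind the claim is ReadWriteOnce (the Azure disk default), the old pod still holds it on one node, and the Deployment's default RollingUpdate strategy is trying to start the replacement pod on another node. One common workaround is switching the affected Deployments to the Recreate strategy, so the old pod releases the disk before the new one starts; a sketch only, not a confirmed fix for this chart:

```sh
# Switch the ckan Deployment to the Recreate strategy. With a JSON merge
# patch, setting rollingUpdate to null clears the old strategy parameters,
# which is required when changing the strategy type.
kubectl -n aa patch deployment ckan --type merge \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'

# Same for the jobs Deployment, which mounts the same ckan-data volume.
kubectl -n aa patch deployment jobs --type merge \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```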
- Not sure whether the bad gateway on https://aa.viderum.xyz/ is related to the above, but I can see nothing wrong in the logs of the working `ckan` pod. Part of the logs from there:
```
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service SchemingGroupsPlugin from environment pca
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service SchemingGroupsPlugin from environment pca
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service SchemingGroupsPlugin from environment pca
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service SchemingGroupsPlugin from environment pca
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service PagesPluginBase from environment pca
2020-02-17 12:34:17,459 INFO  [pyutilib.component.core.pca] [MainThread] Removing service PagesPluginBase from environment pca
2020-02-17 12:34:18,681 INFO  [ckan.config.environment] [MainThread] Loading templates from /usr/lib/ckan/src/ckan/ckan/templates
```
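
On the bad gateway: a 502 from nginx usually means it cannot reach its upstream, so it is worth checking whether the `ckan` Service still has a ready endpoint while the duplicate pods sit in Init. A minimal sketch, assuming the Service is named `ckan`:

```sh
# Does the ckan Service have a ready endpoint backing it? An empty
# ENDPOINTS column would explain the 502 from nginx.
kubectl get endpoints ckan -n aa

# What is nginx logging when the gateway error is served?
kubectl logs deploy/nginx -n aa --tail=50
```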