Bug description
So I'm not sure whether this is a bug or works as designed; feel free to switch the issue type to "feature request" if needed. It also looks like two separate problems are involved, so feel free to split this up.
We are creating Tenants using ArgoCD, which has admin access to the cluster. The developers deploy their manifests through ArgoCD (with admin access) as well. The issue lies with namespace creation within a tenant: the namespaces are also created by ArgoCD and have the Tenant label set on them, but Capsule does not autodiscover namespaces belonging to Tenants. When we create namespaces this way and then check the Tenant's namespaces, the new namespaces do not appear. This also means the developers don't actually have access to their namespaces: they only have permissions through their Tenants, and the namespaces are never assigned to those Tenants.
We did find that creating namespaces manually as a member of the Tenant does the trick, but that's not very GitOps. We did not try using the Tenant UID, because in some cases we create the namespaces at the same time as the Tenants, so the UID is not known yet, and splitting the process into two steps is not practical either.
How to reproduce
The issue is not reproducible using just kubectl because attempting to attach a namespace to a non-owned Tenant resource (which would be the case as an admin) results in the following error:
Error from server (Forbidden): error when creating "STDIN": admission webhook "namespaces.tenants.projectcapsule.dev" denied the request: Cannot assign the desired namespace to a non-owned Tenant
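For reference, this is the kind of admin-level command (no impersonation) that produces the error above, assuming the oil Tenant from the steps below already exists; the namespace name here is just an example:
kubectl create -f - << EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    capsule.clastix.io/tenant: oil
  name: oil-admin
EOF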
So a validation exists. I'm assuming no autodiscovery is implemented because Capsule does not have to expect an admin creating namespaces within Tenants while this validation is enforced. However, the validation apparently does not run when the objects are created through ArgoCD: there is no error, and the namespaces are created with the Tenant labels.
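In case it helps with triage, which operations the webhook intercepts (CREATE only, or also UPDATE) can be checked like this; the ValidatingWebhookConfiguration object name depends on the Helm release, so this simply greps across all of them:
# Show the rules of the Capsule namespace webhook named in the error above
kubectl get validatingwebhookconfigurations -o yaml | grep -B 2 -A 12 'namespaces.tenants.projectcapsule.dev'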
- For simplicity, the Tenant can be created using kubectl:
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
    - name: alice
      kind: User
EOF
- Creating a new namespace as an existing user (who, as a Tenant owner, has permission to create namespaces) should work and assign the namespace to the tenant:
kubectl create --as alice -f - << EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    capsule.clastix.io/tenant: oil
  name: oil-1
EOF
Running kubectl get tenant oil should then show one namespace:
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR READY STATUS AGE
oil Active 1 True reconciled 91s
As mentioned, reproducing the case with ArgoCD is a bit more involved. I did not attempt to reproduce it with a local setup, but a local cluster with ArgoCD installed in its default configuration should probably do. All you then have to do is create the namespace manifest and deploy it within an ArgoCD Application (a rough sketch of such an Application follows the manifest below).
apiVersion: v1
kind: Namespace
metadata:
  labels:
    capsule.clastix.io/tenant: oil
  name: oil-2
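An Application along these lines should do; the Application name, project, repo URL, and path are hypothetical placeholders, not taken from our actual setup:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: oil-namespaces        # hypothetical name
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
  source:
    repoURL: https://example.com/tenants.git   # hypothetical repo
    targetRevision: main
    path: tenants/oil                          # hypothetical path
  syncPolicy:
    automated: {}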
The namespace will be created without an error and will have the label set correctly. But when you run kubectl get tenant oil, you will only find the manually created namespace attached to the Tenant, not the ArgoCD one. I do know that ArgoCD tries to create namespaces automatically, so it's possible that ArgoCD first creates the namespace without any label (which would not trigger a validation error) and then adds the label from the namespace manifest it is deploying. That would bypass the validation, leaving a namespace with a correct Tenant label that is never assigned to its Tenant.
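Assuming the manifest above was synced, the mismatch can be verified like this:
# The label is present on the namespace created through ArgoCD...
kubectl get namespace oil-2 --show-labels
# ...but the Tenant still only counts the manually created namespace
kubectl get tenant oil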
Expected behavior
I expect that when I create a namespace with a valid Tenant label using admin access, the namespace will be assigned to that Tenant.
Logs
There are no interesting logs for this case. I assume this is because the namespace creation through ArgoCD bypasses the validation entirely.
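For completeness, this is roughly how the controller logs were checked; the namespace and deployment names assume a default Helm install and may differ per release:
# Tail the Capsule controller logs (names are assumptions, adjust to your release)
kubectl -n capsule-system logs deploy/capsule --tail=200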
Additional context
- Capsule version: v0.11.2
- Capsule proxy version: v0.9.13
- Helm Chart version: same as the Capsule / Capsule-proxy versions
- Kubernetes version: 1.33.6