Title: SpinAppExecutor finalizer checks for dependent SpinApps across all namespaces instead of its own
I run SpinKube on AKS (1.32/1.33) as part of our platform stack. We deploy a SpinAppExecutor named containerd-shim-spin into each namespace that runs SpinApps — following the pattern from the Helm chart's bootstrap hook.
What happened
A smoke-test namespace was stuck in Terminating for 3 days. The SpinAppExecutor in that namespace had core.spinkube.dev/finalizer set and a deletionTimestamp, but the finalizer never cleared.
There were zero SpinApps in the smoke-test namespace:
$ kubectl -n smoke-test get spinapp
No resources found in smoke-test namespace.
The only SpinApp referencing an executor named containerd-shim-spin was in a completely different namespace (weekly-full-20260304). When I deleted that SpinApp, the executor in smoke-test was deleted immediately and the namespace finished terminating.
Reproduction (verified on K8s 1.33.6, spin-operator v0.6.1)
# Setup: two namespaces, same-named executor in each, SpinApp only in ns-b
kubectl create ns repro-ns-a
kubectl create ns repro-ns-b
# Create executors (same name, different namespaces)
cat <<'EOF' | kubectl apply -f -
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
  namespace: repro-ns-a
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2
---
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
  namespace: repro-ns-b
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2
---
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: repro-app
  namespace: repro-ns-b
spec:
  image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.15.1"
  replicas: 1
  executor: containerd-shim-spin
EOF
# Verify: zero SpinApps in ns-a
kubectl get spinapp -n repro-ns-a
# No resources found in repro-ns-a namespace.
# Delete executor in ns-a
kubectl delete spinappexecutor containerd-shim-spin -n repro-ns-a --wait=false
# It's stuck:
kubectl get spinappexecutor containerd-shim-spin -n repro-ns-a -o jsonpath='{.metadata.deletionTimestamp}'
# 2026-03-06T15:54:13Z (finalizer won't clear)
Operator logs:
ERROR Reconciler error {"controller":"spinappexecutor", "SpinAppExecutor":{"name":"containerd-shim-spin","namespace":"repro-ns-a"}, "error":"cannot delete SpinAppExecutor with dependent SpinApps"}
Deleting the SpinApp in repro-ns-b immediately clears the finalizer on the executor in repro-ns-a.
Cause
In handleDeletion(), the dependent SpinApp lookup is not scoped to the executor's namespace:
// internal/controller/spinappexecutor_controller.go:106
r.Client.List(ctx, &spinApps, client.MatchingFields{spinAppExecutorKey: executor.Name})
This matches SpinApps by executor name across the entire cluster: any SpinApp in any namespace that references an executor with the same name blocks deletion of every other executor with that name.
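The scoping difference can be illustrated with a self-contained sketch (plain Go, no controller-runtime; the types and helper names here are hypothetical, just modeling the lookup, not the operator's real API):

```go
package main

import "fmt"

// SpinApp is a minimal stand-in for the real CRD type.
type SpinApp struct {
	Name, Namespace, Executor string
}

// listByExecutor mimics client.MatchingFields{spinAppExecutorKey: name}:
// it filters on the indexed executor name only, so matches span every namespace.
func listByExecutor(apps []SpinApp, executor string) []SpinApp {
	var out []SpinApp
	for _, a := range apps {
		if a.Executor == executor {
			out = append(out, a)
		}
	}
	return out
}

// listByExecutorInNamespace adds the client.InNamespace scoping.
func listByExecutorInNamespace(apps []SpinApp, executor, ns string) []SpinApp {
	var out []SpinApp
	for _, a := range apps {
		if a.Executor == executor && a.Namespace == ns {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	// Cluster state from the reproduction: the only SpinApp lives in repro-ns-b.
	apps := []SpinApp{
		{Name: "repro-app", Namespace: "repro-ns-b", Executor: "containerd-shim-spin"},
	}

	// Finalizer check while deleting the executor in repro-ns-a:
	unscoped := listByExecutor(apps, "containerd-shim-spin")
	scoped := listByExecutorInNamespace(apps, "containerd-shim-spin", "repro-ns-a")

	fmt.Println(len(unscoped)) // 1 -> "dependent" SpinApps found, deletion blocked
	fmt.Println(len(scoped))   // 0 -> no dependents in this namespace, deletion proceeds
}
```

With the unscoped lookup the executor in repro-ns-a appears to have a dependent; with the namespace-scoped lookup it correctly has none.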
Fix
Add client.InNamespace(executor.Namespace):
r.Client.List(ctx, &spinApps, client.InNamespace(executor.Namespace), client.MatchingFields{spinAppExecutorKey: executor.Name})
I've verified this fix locally: built and deployed the patched operator, ran the same reproduction, and the executor in repro-ns-a deleted immediately while the SpinApp in repro-ns-b was unaffected.
PR with fix and regression test incoming.