Merged
.github/workflows/marketplace-smoke-test.yaml (23 additions, 0 deletions)

@@ -24,6 +24,8 @@ jobs:
      - name: create kind cluster
        uses: helm/kind-action@v1

      ## Install SpinKube and verify Spin App runs

      - name: helm install spinkube
        run: |
          helm install spinkube \
@@ -56,6 +58,25 @@ jobs:
      - name: Verify curl
        run: curl localhost:8083/hello
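The curl check above assumes an earlier, truncated step exposes the Spin app on localhost:8083; a hypothetical sketch of such a step (the step name, Service name, and port mapping are assumptions, not shown in this diff):

      # Hypothetical step, not part of the diff as shown: forward a local port
      # to the Spin app's Service so the curl check can reach it.
      - name: port-forward spin app
        run: kubectl port-forward svc/simple-spinapp 8083:80 &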

      ## Upgrade release and verify shim is re-installed

      - name: delete any lingering kwasm jobs
        run: kubectl -n spinkube delete job -l kwasm.sh/job=true

      - name: helm upgrade spinkube and watch for annotate and install job completions
        run: |
          helm upgrade spinkube \
            --wait \
            --namespace spinkube \
            --debug \
            marketplace/charts/spinkube-azure-marketplace &
          timeout 30s bash -c 'until [[ "$(kubectl -n spinkube get job -l job-name=spinkube-kwasm-annotate-nodes -o json | jq ".items[].spec.completions")" == "1" ]]; do sleep 2; done'
          timeout 30s bash -c 'until [[ "$(kubectl -n spinkube get job -l kwasm.sh/job=true -o json | jq ".items[].spec.completions")" == "1" ]]; do sleep 2; done'
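A note on why these are polling loops rather than `kubectl wait`: both jobs are created asynchronously (the annotate job by the Helm hook, the install job by the kwasm operator), and `kubectl wait` errors out when its label selector matches nothing yet. For a job that is already known to exist, an equivalent check could look like this (a sketch, not part of the PR):

          # Wait for the shim install job to reach the Complete condition.
          kubectl -n spinkube wait --for=condition=complete --timeout=30s job -l kwasm.sh/job=true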

      ## Delete release

      # First, verify deletion is blocked when Spin App resources exist

      - name: helm delete spinkube
        run: |
          if helm delete spinkube --timeout 1m --namespace spinkube; then
          ...

@@ -80,6 +101,8 @@ jobs:
      - name: Delete Spin App
        run: kubectl delete spinapp simple-spinapp

      # Now verify deletion proceeds and no resources remain

      - name: helm delete spinkube
        run: helm delete spinkube --timeout 1m --namespace spinkube

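The remainder of the workflow is truncated above; a hedged sketch of a follow-up assertion one might add after the final delete (the step name and checks are assumptions, not shown in the PR):

      # Assumed check, not part of the diff as shown: the SpinApp CR should be
      # gone and the release namespace should hold no leftover workloads.
      - name: verify no SpinKube resources remain
        run: |
          ! kubectl get spinapp simple-spinapp 2>/dev/null
          kubectl -n spinkube get all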
kwasm annotate-nodes Job template (in marketplace/charts/spinkube-azure-marketplace)

@@ -3,7 +3,7 @@ kind: Job
metadata:
  name: "{{ .Release.Name }}-kwasm-annotate-nodes"
  annotations:
-    "helm.sh/hook": post-install
+    "helm.sh/hook": post-install,post-upgrade
Member:

It seems fine to re-label nodes for both the post-install and post-upgrade scenarios. Have you tested this? I am asking because I am concerned about whether we need buffer time when flipping kwasm-node from false to true.

Collaborator (Author):

I have tested locally, though only on KinD. Maybe worth checking on AKS?

Oh, I could also add test coverage for this in our helm chart smoke test (or create a new workflow with scenarios like this)...

Member:

I like the idea of adding more tests.

"helm.sh/hook-weight": "-4"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
Expand All @@ -15,8 +15,13 @@ spec:
      containers:
        - name: kubectl
          image: {{ printf "%s/%s:%s" .Values.global.azure.images.kubectl.registry .Values.global.azure.images.kubectl.image .Values.global.azure.images.kubectl.tag }}
-         command: ["kubectl"]
-         args: ["annotate", "node", "--all", "kwasm.sh/kwasm-node=true"]
+         command: ["/bin/sh", "-c"]
+         args:
+           - |-
+             echo "Annotating nodes with kwasm.sh/kwasm-node=false to reset installation of the shim for upgrade scenarios"
+             kubectl annotate node --all kwasm.sh/kwasm-node=false --overwrite
+             echo "Annotating nodes with kwasm.sh/kwasm-node=true to (re-)trigger installation of the shim"
+             kubectl annotate node --all kwasm.sh/kwasm-node=true --overwrite
      restartPolicy: OnFailure
---
apiVersion: v1
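The annotation flip above is what the upgraded smoke test exercises end to end; a minimal manual sketch of the same behavior on a test cluster (assumes kwasm-operator is running in the spinkube namespace):

    # Flip the annotation off and back on, then watch the operator spawn a
    # fresh shim install job per node (the jobs carry the kwasm.sh/job=true label).
    kubectl annotate node --all kwasm.sh/kwasm-node=false --overwrite
    kubectl annotate node --all kwasm.sh/kwasm-node=true --overwrite
    kubectl -n spinkube get jobs -l kwasm.sh/job=true --watch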