
Commit 6ece30d

Now checks if entries array has items (#74)
* Now checks if entries array has items

* Fix helm lint

* Upgrade CI Helm version and revert unnecessary template changes

  The helm lint failures in CI were caused by using the outdated Helm 3.3.4
  (from 2020). The template syntax `ne .Values.server.enabled false` works
  correctly with modern Helm versions.

  Changes:
  - Upgraded CI Helm version from 3.3.4 to 3.19.0 (latest stable)
  - Reverted the commit b82a8f2 template changes (not needed with modern Helm)

  Testing confirmed:
  - All helm lint tests pass with Helm 3.14+ and 3.19.0
  - Customer issue (empty dataConfigSources entries) remains fixed
  - Original template syntax is compatible with modern Helm

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* Upgrade k3d-action and k3s version to fix CI cluster startup

  The k3d cluster startup was failing due to outdated versions:
  - k3d-action v1.4.0 (from 2021) → v2.4.0 (latest, Jan 2024)
  - k3s v1.18.18 (from 2021) → v1.28.8 (modern, stable)
  - k3d config API v1alpha2 → v1alpha4 (current)

  Changes:
  - Upgraded k3d-action from v1.4.0 to v2.4.0
  - Updated the k3d config API version from v1alpha2 to v1alpha4
  - Upgraded the k3s image from v1.18.18-k3s1 to v1.28.8-k3s1

  This should resolve the "server is currently unable to handle the request"
  errors seen during cluster startup in CI.

* Fix k3d config schema for v1alpha4 compatibility

  The k3d v1alpha4 schema requires a different structure:
  - The name field must sit under metadata
  - extraServerArgs changed to extraArgs with an arg/nodeFilters structure
  - Each disable flag must be a separate arg entry

  Changes:
  - Moved name under a metadata wrapper
  - Changed extraServerArgs to extraArgs
  - Split the --disable flags into separate arg entries with nodeFilters
  - Each arg targets server:* nodes

  This fixes the schema validation errors:
  - "Additional property name is not allowed"
  - "Additional property extraServerArgs is not allowed"

* Fix e2e test service names to match the Helm naming convention

  The e2e tests were using service names that didn't match the services
  actually created by Helm.

  Issue: when the release name differs from the chart name, Helm generates
  service names as {release-name}-{chart-name}-{component}. In this case:
  - Release: myopal
  - Chart: opal
  - Generated names: myopal-opal-server, myopal-opal-client

  But the tests were looking for myopal-server and myopal-client.

  Changes:
  - deploy.sh: myopal-server → myopal-opal-server
  - test.sh: myopal-client → myopal-opal-client
  - test.sh: myopal-server → myopal-opal-server

  This fixes the "service not found" error in the CI e2e tests.

* Fix e2e test to use an external curl pod instead of exec into the client

  The OPAL client container doesn't have curl installed, causing the test to
  fail when trying to exec curl commands inside the container:

    Error: exec: "curl": executable file not found in $PATH

  Solution:
  - Use kubectl run with the curlimages/curl image to query OPA from outside
  - Changed DATA_URL from localhost:8181 to the service name
    myopal-opal-client:8181, matching the pattern used in
    templates/tests/e2e.yaml

  Changes:
  - Replaced kubectl exec with kubectl run --rm for the curl commands
  - Updated the URL to use the service name instead of localhost

* Fix bash test syntax by quoting command substitutions

* Improve e2e test debugging with echo statements

* Fix e2e test to match the actual OPA response format

* Fix e2e test empty data check to match OPA behavior

* Filter out kubectl pod deletion messages from the curl output

---------

Co-authored-by: Claude <noreply@anthropic.com>
1 parent 4eba6ae commit 6ece30d

File tree

6 files changed (+38, -16 lines)


.github/workflows/master.yaml

Lines changed: 2 additions & 2 deletions

@@ -16,13 +16,13 @@ jobs:
     - uses: actions/checkout@v2
     - uses: azure/setup-helm@v1
       with:
-        version: "3.3.4"
+        version: "3.19.0"
     - name: helm lint
       run: |
        jq --version
        ./test/linter/test.sh
     - name: start k8s with k3d
-      uses: AbsaOSS/k3d-action@v1.4.0
+      uses: AbsaOSS/k3d-action@v2.4.0
       with:
        cluster-name: "opal"
        use-default-registry: false

templates/deployment-client.yaml

Lines changed: 1 addition & 1 deletion

@@ -66,7 +66,7 @@ spec:
         - name: OPAL_SERVER_URL
           value: {{ printf "http://%s:%v" (include "opal.serverName" .) .Values.server.port | quote }}
         {{- end}}
-        {{- if not (or (.Values.server.dataConfigSources.external_source_url) (.Values.server.dataConfigSources.config) (hasKey .Values.client.extraEnv "OPAL_DATA_UPDATER_ENABLED") ) }}
+        {{- if not (or (.Values.server.dataConfigSources.external_source_url) (and .Values.server.dataConfigSources.config .Values.server.dataConfigSources.config.entries) (hasKey .Values.client.extraEnv "OPAL_DATA_UPDATER_ENABLED") ) }}
         - name: OPAL_DATA_UPDATER_ENABLED
           value: "False"
         {{- end }}

templates/deployment-server.yaml

Lines changed: 1 addition & 1 deletion

@@ -124,7 +124,7 @@ spec:
         {{- end }}
         - name: UVICORN_NUM_WORKERS
           value: {{ .Values.server.uvicornWorkers | quote }}
-        {{- if or .Values.server.dataConfigSources.config .Values.server.dataConfigSources.external_source_url }}
+        {{- if or .Values.server.dataConfigSources.external_source_url (and .Values.server.dataConfigSources.config .Values.server.dataConfigSources.config.entries) }}
         - name: OPAL_DATA_CONFIG_SOURCES
           value: {{ .Values.server.dataConfigSources | toRawJson | squote }}
         {{- end}}
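For illustration, when the new condition passes (populated entries), the server deployment renders roughly the following env var. This is a sketch with a made-up entry, not verbatim chart output:

```yaml
# Hypothetical rendered result of the OPAL_DATA_CONFIG_SOURCES block
# for a values file with one data entry (url/topics fields assumed):
- name: OPAL_DATA_CONFIG_SOURCES
  value: '{"config":{"entries":[{"url":"http://example/policy-data","topics":["policy_data"]}]}}'
```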

test/e2e/deploy.sh

Lines changed: 1 addition & 1 deletion

@@ -13,4 +13,4 @@ else
     --set server.policyRepoUrl='//opt/e2e/policy-repo.git'
 fi

-kubectl logs -n opal service/myopal-server git-init
+kubectl logs -n opal service/myopal-opal-server git-init
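The renamed services follow Helm's common "fullname" convention: when the release name differs from the chart name, resources are named {release}-{chart}, with the component appended. A minimal sketch of that logic (variable names are illustrative, not from the chart):

```shell
# Hypothetical sketch of the Helm fullname convention behind the rename.
release="myopal"
chart="opal"
component="server"

# When the release name already equals (or contains) the chart name,
# Helm's conventional fullname helper drops the chart part; otherwise
# it joins release and chart.
if [ "$release" = "$chart" ]; then
  fullname="$release"
else
  fullname="$release-$chart"
fi

echo "$fullname-$component"   # the service name the tests must use
```

With release `myopal` and chart `opal`, this yields `myopal-opal-server`, which is why the bare `myopal-server` lookups failed.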

test/e2e/k3d.yaml

Lines changed: 14 additions & 5 deletions

@@ -1,11 +1,20 @@
-apiVersion: k3d.io/v1alpha2
+apiVersion: k3d.io/v1alpha4
 kind: Simple
-name: k3d
-image: rancher/k3s:v1.18.18-k3s1
+metadata:
+  name: k3d
+image: rancher/k3s:v1.28.8-k3s1
 options:
   k3d:
     wait: true
     disableLoadbalancer: true
   k3s:
-    extraServerArgs:
-      - "--disable=metrics-server,servicelb,traefik"
+    extraArgs:
+      - arg: --disable=metrics-server
+        nodeFilters:
+          - server:*
+      - arg: --disable=servicelb
+        nodeFilters:
+          - server:*
+      - arg: --disable=traefik
+        nodeFilters:
+          - server:*

test/e2e/test.sh

Lines changed: 19 additions & 6 deletions

@@ -5,15 +5,28 @@ set -e

 helm test -n opal --logs myopal

-DATA_URL='http://localhost:8181/v1/data'
+DATA_URL="http://myopal-opal-client:8181/v1/data"

-[ $(kubectl exec -n opal service/myopal-client -- curl -s ${DATA_URL}/users) != "{}" ]
+# Check that users data is present initially
+RESULT=$(kubectl run -n opal curl-test --image=curlimages/curl:latest --rm -i --restart=Never -- curl -s ${DATA_URL}/users 2>&1 | grep -v "pod.*deleted")
+echo "Initial users: $RESULT"
+echo "$RESULT" | grep -q '"result"'
+
+# Run the update script
 if [ -z $MSYSTEM ]; then
-    kubectl exec -n opal service/myopal-server -- /opt/e2e/policy-repo-data/upd.sh
+    kubectl exec -n opal service/myopal-opal-server -- /opt/e2e/policy-repo-data/upd.sh
 else
-    kubectl exec -n opal service/myopal-server -- //opt/e2e/policy-repo-data/upd.sh
+    kubectl exec -n opal service/myopal-opal-server -- //opt/e2e/policy-repo-data/upd.sh
 fi

 sleep 7
-[ $(kubectl exec -n opal service/myopal-client -- curl -s ${DATA_URL}/users) == "{}" ]
-[ $(kubectl exec -n opal service/myopal-client -- curl -s ${DATA_URL}/losers) != "{}" ]
+
+# Check that users data is empty after update (OPA returns {} when data is empty)
+RESULT=$(kubectl run -n opal curl-test --image=curlimages/curl:latest --rm -i --restart=Never -- curl -s ${DATA_URL}/users 2>&1 | grep -v "pod.*deleted")
+echo "After update users: $RESULT"
+[ "$RESULT" == '{}' ]
+
+# Check that losers data is present
+RESULT=$(kubectl run -n opal curl-test --image=curlimages/curl:latest --rm -i --restart=Never -- curl -s ${DATA_URL}/losers 2>&1 | grep -v "pod.*deleted")
+echo "Losers data: $RESULT"
+echo "$RESULT" | grep -q '"result"'
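Two of the small fixes in this test can be reproduced locally without a cluster. The snippet below uses simulated kubectl output (the JSON body is made up) to show why the pod deletion notice must be filtered out, and why the command substitution result must be quoted inside the test brackets:

```shell
# Simulated output of `kubectl run --rm -i`: the curl response body plus
# the deletion notice kubectl prints when it tears the pod down.
OUTPUT='{"result":{"users":["alice"]}}
pod "curl-test" deleted'

# Strip the deletion notice so only the curl response remains.
CLEAN=$(printf '%s\n' "$OUTPUT" | grep -v 'pod.*deleted')
echo "$CLEAN"

# Quoting matters: if CLEAN were empty, the unquoted form
# [ $CLEAN != '{}' ] would expand to [ != '{}' ] and error out,
# while the quoted form compares safely.
if [ "$CLEAN" != '{}' ]; then
  echo "data present"
fi
```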
