Commit 073b187

Merge branch 'main' into main
2 parents: 6e62ef4 + 07a2a90

File tree

129 files changed, +1310/-1936 lines


.github/ISSUE_TEMPLATE/release-from-scratch-testing.md

Lines changed: 29 additions & 0 deletions
@@ -85,3 +85,32 @@ stackablectl demo install <DEMO_NAME>
 # --- IMPORTANT ---
 # Run through the nightly demo instructions (refer to the list above).
 ```
+
+## List of Stacks
+
+Some stacks are not used by demos, but still need testing in some way.
+
+> [!TIP]
+> Some of the stacks have a [tutorial](https://docs.stackable.tech/home/nightly/tutorials/) to follow.
+
+<!--
+The following list was generated by:
+
+# Using the real `yq` that follows the same interface as `jq`
+yq -y --slurp '.[0] * .[1] | .allStacks - .usedStacks' \
+  <(cat demos/demos-v2.yaml | yq '{usedStacks: [.demos[] | .stackableStack]}') \
+  <(cat stacks/stacks-v2.yaml | yq '{allStacks: [.stacks | keys] | flatten}') \
+  | sed -e 's/^- /- [ ] /g' \
+  | sort
+-->
+
+- [ ] monitoring
+- [ ] observability
+- [ ] openldap
+- [ ] tutorial-openldap
+
+You can install the stack via:
+
+```shell
+stackablectl stack install <STACK_NAME> --release dev
+```
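The checklist and install command above can be cross-checked against the tool itself; a minimal sketch, assuming `stackablectl` is on the PATH and uses the default release index:

```shell
# Show all stacks stackablectl knows about, to compare with the checklist above.
stackablectl stack list

# Install one of the unused stacks against the dev release, as the template suggests.
stackablectl stack install monitoring --release dev
```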

.github/workflows/dev_nifi.yaml

Lines changed: 1 addition & 1 deletion
@@ -25,5 +25,5 @@ jobs:
       image-name: nifi
       # TODO (@NickLarsenNZ): Use a versioned image with stackable0.0.0-dev or stackableXX.X.X so that
       # the demo is reproducible for the release and it will be automatically replaced for the release branch.
-      image-version: 2.4.0-postgresql
+      image-version: 2.6.0-postgresql
       containerfile-path: demos/signal-processing/Dockerfile-nifi
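Before relying on the bumped tag, it can be worth confirming the image actually resolves in the registry; a minimal sketch, where the full image path `oci.stackable.tech/sdp/nifi` is a guess based on other images in this commit, not taken from the workflow itself:

```shell
# Hypothetical image reference; the registry path is assumed, only the tag comes from the diff.
docker pull oci.stackable.tech/sdp/nifi:2.6.0-postgresql
```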
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: airflow-demo-clusterrole
+rules:
+  - apiGroups:
+      - spark.stackable.tech
+    resources:
+      - sparkapplications
+    verbs:
+      - create
+      - get
+      - list
+  - apiGroups:
+      - apps
+    resources:
+      - statefulsets
+    verbs:
+      - get
+      - watch
+      - list
+  - apiGroups:
+      - ""
+    resources:
+      - persistentvolumeclaims
+    verbs:
+      - list
+  - apiGroups:
+      - ""
+    resources:
+      - pods
+    verbs:
+      - get
+      - watch
+      - list
+  - apiGroups:
+      - ""
+    resources:
+      - pods/exec
+    verbs:
+      - create
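One way to smoke-test the new ClusterRole is `kubectl auth can-i` with impersonation; a minimal sketch, assuming the role ends up bound to a group named `airflow` (the actual subject is defined in the ClusterRoleBinding below, and `test-user` is just a dummy impersonation name):

```shell
# Probe each rule of airflow-demo-clusterrole via group impersonation.
kubectl auth can-i create sparkapplications.spark.stackable.tech --as=test-user --as-group=airflow
kubectl auth can-i list statefulsets.apps --as=test-user --as-group=airflow
kubectl auth can-i create pods/exec --as=test-user --as-group=airflow
```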

demos/airflow-scheduled-job/01-airflow-spark-clusterrole.yaml

Lines changed: 0 additions & 36 deletions
This file was deleted.

demos/airflow-scheduled-job/02-airflow-spark-clusterrolebinding.yaml renamed to demos/airflow-scheduled-job/02-airflow-demo-clusterrolebinding.yaml

Lines changed: 2 additions & 2 deletions
@@ -2,11 +2,11 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
-  name: airflow-spark-clusterrole-binding
+  name: airflow-demo-clusterrole-binding
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: airflow-spark-clusterrole
+  name: airflow-demo-clusterrole
 subjects:
   - apiGroup: rbac.authorization.k8s.io
     kind: Group
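After the rename, a quick consistency check confirms the binding now points at the renamed role; a minimal sketch:

```shell
# Should print: airflow-demo-clusterrole
kubectl get clusterrolebinding airflow-demo-clusterrole-binding \
  -o jsonpath='{.roleRef.name}{"\n"}'
```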

demos/airflow-scheduled-job/03-enable-and-run-spark-dag.yaml

Lines changed: 6 additions & 4 deletions
@@ -9,10 +9,12 @@ spec:
       containers:
         - name: start-pyspark-job
           image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
-          # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
-          # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
-          # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that
-          # all pods are reachable (albeit independent of each other).
+          # N.B. it is possible for the scheduler to report that a DAG exists,
+          # only for the worker task to fail if a pod is unexpectedly
+          # restarted. The wait/watch steps below are not "water-tight" but add
+          # a layer of stability by at least ensuring that the cluster is
+          # initialized and ready and that all pods are reachable (albeit
+          # independent of each other).
           command:
             - bash
             - -euo
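The wait/watch steps the reworded comment refers to (here and in the date-DAG job below) follow the `kubectl rollout status` idiom used throughout this commit; a minimal sketch of the same guard, with a timeout added as an optional hardening step that is not in the original:

```shell
# Block until every replica of the Airflow StatefulSets reports ready;
# --watch keeps polling, --timeout bounds the wait (assumed value).
kubectl rollout status --watch --timeout=300s statefulset/airflow-webserver-default
kubectl rollout status --watch --timeout=300s statefulset/airflow-scheduler-default
```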

demos/airflow-scheduled-job/04-enable-and-run-date-dag.yaml

Lines changed: 6 additions & 4 deletions
@@ -9,10 +9,12 @@ spec:
       containers:
         - name: start-date-job
           image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
-          # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
-          # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
-          # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that
-          # all pods are reachable (albeit independent of each other).
+          # N.B. it is possible for the scheduler to report that a DAG exists,
+          # only for the worker task to fail if a pod is unexpectedly
+          # restarted. The wait/watch steps below are not "water-tight" but add
+          # a layer of stability by at least ensuring that the cluster is
+          # initialized and ready and that all pods are reachable (albeit
+          # independent of each other).
           command:
             - bash
             - -euo
Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: start-kafka-job
+spec:
+  template:
+    spec:
+      containers:
+        - name: start-kafka-job
+          image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
+          env:
+            - name: NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          # N.B. it is possible for the scheduler to report that a DAG exists,
+          # only for the worker task to fail if a pod is unexpectedly
+          # restarted. The wait/watch steps below are not "water-tight" but add
+          # a layer of stability by at least ensuring that the cluster is
+          # initialized and ready and that all pods are reachable (albeit
+          # independent of each other).
+          command:
+            - bash
+            - -euo
+            - pipefail
+            - -c
+            - |
+              # Kafka: wait for cluster
+              kubectl rollout status --watch statefulset/kafka-broker-default
+              kubectl rollout status --watch statefulset/kafka-controller-default
+
+              # Kafka: create consumer offsets topics (required for group coordinator)
+              kubectl exec kafka-broker-default-0 -c kafka -- \
+                /stackable/kafka/bin/kafka-topics.sh \
+                --bootstrap-server kafka-broker-default-0-listener-broker.$(NAMESPACE).svc.cluster.local:9093 \
+                --create \
+                --if-not-exists \
+                --topic __consumer_offsets \
+                --partitions 50 \
+                --replication-factor 1 \
+                --config cleanup.policy=compact \
+                --command-config /stackable/config/client.properties
+
+              # Airflow: wait for cluster
+              kubectl rollout status --watch statefulset/airflow-webserver-default
+              kubectl rollout status --watch statefulset/airflow-scheduler-default
+
+              # Airflow: activate DAG
+              AIRFLOW_ADMIN_PASSWORD=$(cat /airflow-credentials/adminUser.password)
+              ACCESS_TOKEN=$(curl -XPOST http://airflow-webserver-default-headless:8080/auth/token -H 'Content-Type: application/json' -d '{"username": "admin", "password": "'$AIRFLOW_ADMIN_PASSWORD'"}' | jq -r .access_token)
+              curl -H "Authorization: Bearer $ACCESS_TOKEN" -H 'Content-Type: application/json' -XPATCH http://airflow-webserver-default-headless:8080/api/v2/dags/kafka_watcher -d '{"is_paused": false}' | jq
+
+              # Kafka: produce a message to create the topic
+              kubectl exec kafka-broker-default-0 -c kafka -- bash -c \
+                'echo "Hello World at: $(date)" | /stackable/kafka/bin/kafka-console-producer.sh \
+                --bootstrap-server kafka-broker-default-0-listener-broker.$(NAMESPACE).svc.cluster.local:9093 \
+                --producer.config /stackable/config/client.properties \
+                --topic test-topic'
+          volumeMounts:
+            - name: airflow-credentials
+              mountPath: /airflow-credentials
+      volumes:
+        - name: airflow-credentials
+          secret:
+            secretName: airflow-credentials
+      restartPolicy: OnFailure
+  backoffLimit: 20 # give some time for the Airflow cluster to be available
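To verify the produced message actually reached `test-topic`, the matching console consumer can be run the same way; a minimal sketch, reusing the broker pod, listener address, and client config from the job above (here `$NAMESPACE` is a plain shell variable you set yourself before running the command):

```shell
# Read one message back from test-topic and exit.
kubectl exec kafka-broker-default-0 -c kafka -- \
  /stackable/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka-broker-default-0-listener-broker.$NAMESPACE.svc.cluster.local:9093 \
  --consumer.config /stackable/config/client.properties \
  --topic test-topic \
  --from-beginning \
  --max-messages 1
```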
Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: start-users-job
+spec:
+  template:
+    spec:
+      containers:
+        - name: start-users-job
+          image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
+          # N.B. it is possible for the scheduler to report that a DAG exists,
+          # only for the worker task to fail if a pod is unexpectedly
+          # restarted. The wait/watch steps below are not "water-tight" but add
+          # a layer of stability by at least ensuring that the cluster is
+          # initialized and ready and that all pods are reachable (albeit
+          # independent of each other).
+          command:
+            - bash
+            - -euo
+            - pipefail
+            - -c
+            - |
+              # Airflow: wait for cluster
+              kubectl rollout status --watch statefulset/airflow-webserver-default
+              kubectl rollout status --watch statefulset/airflow-scheduler-default
+
+              # Airflow: create users
+              kubectl exec airflow-webserver-default-0 -- airflow users create \
+                --username "jane.doe" \
+                --firstname "Jane" \
+                --lastname "Doe" \
+                --email "[email protected]" \
+                --password "jane.doe" \
+                --role "User"
+
+              kubectl exec airflow-webserver-default-0 -- airflow users create \
+                --username "richard.roe" \
+                --firstname "Richard" \
+                --lastname "Roe" \
+                --email "[email protected]" \
+                --password "richard.roe" \
+                --role "User"
+          volumeMounts:
+            - name: airflow-credentials
+              mountPath: /airflow-credentials
+      volumes:
+        - name: airflow-credentials
+          secret:
+            secretName: airflow-credentials
+      restartPolicy: OnFailure
+  backoffLimit: 20 # give some time for the Airflow cluster to be available
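Whether both accounts were created can be checked with the same `airflow users` CLI the job uses; a minimal sketch:

```shell
# jane.doe and richard.roe should both appear in the listing.
kubectl exec airflow-webserver-default-0 -- airflow users list
```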

demos/argo-cd-git-ops/manifests/airflow/airflow.yaml

Lines changed: 3 additions & 7 deletions
@@ -5,12 +5,8 @@ metadata:
   name: airflow
 spec:
   image:
-    # Currently does not work with the kubernetes executor S3 logging
-    # (and its still marked experimental as of release 25.7). See:
-    # https://github.com/apache/airflow/issues/50583
-    # https://github.com/apache/airflow/issues/52501
-    # productVersion: 3.0.1
-    productVersion: 2.10.5
+    productVersion: 3.0.6
+    pullPolicy: IfNotPresent
   clusterConfig:
     loadExamples: false
     exposeConfig: false
@@ -46,7 +42,7 @@
         mountPath: /stackable/minio-tls
   webservers:
     roleConfig:
-      listenerClass: external-unstable
+      listenerClass: external-stable
     envOverrides: &envOverrides
       AIRFLOW_CONN_KUBERNETES_IN_CLUSTER: "kubernetes://?__extra__=%7B%22extra__kubernetes__in_cluster%22%3A+true%2C+%22extra__kubernetes__kube_config%22%3A+%22%22%2C+%22extra__kubernetes__kube_config_path%22%3A+%22%22%2C+%22extra__kubernetes__namespace%22%3A+%22%22%7D"
     # Via sealed secrets and pod overrides, just kept for reference here
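With `external-stable`, the Stackable listener-operator should expose the webserver on a stable external address (typically a LoadBalancer) rather than the node-port style address of `external-unstable`; a minimal sketch for inspecting the result, where the exact Listener object name is an assumption that depends on the operator's naming:

```shell
# List the Listener objects created for the Airflow roles.
kubectl get listeners.listeners.stackable.tech

# Hypothetical object name; check the listing above for the real one.
kubectl describe listeners.listeners.stackable.tech airflow-webserver
```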
