
Commit 31a1979

DavidBuzatu-Marian authored and ustiugov committed
Added knative scripts for automated tests
Signed-off-by: David-Marian Buzatu <[email protected]>

Removed redundant logging

Signed-off-by: David-Marian Buzatu <[email protected]>

Updated readme and removed sudo from script runs

Signed-off-by: David-Marian Buzatu <[email protected]>
Signed-off-by: davidbuzatu-marian <[email protected]>
1 parent ce2a8bc commit 31a1979

File tree

6 files changed (+197, -6 lines)


.github/workflows/stargz_tests.yml

Lines changed: 32 additions & 6 deletions

```diff
@@ -22,19 +22,46 @@ env:
 jobs:
   stargz-container-test:
     name: Test running stargz-based image using kn
+    env:
+      KIND_VERSION: v0.14.0
+      K8S_VERSION: v1.23
+      YAML_DIR: workloads/container
     runs-on: ubuntu-20.04
-    steps:
+    strategy:
+      fail-fast: false
+      matrix:
+        service:
+          [
+            trace_func_go,
+          ]
 
+    steps:
       - name: Set up Go 1.18
         uses: actions/setup-go@v3
         with:
           go-version: 1.18
 
       - name: Check out code into the Go module directory
         uses: actions/checkout@v3
+      - name: Create k8s Kind Cluster
+        run: bash ./scripts/stargz/01-kind.sh
+
+      - name: Install Serving
+        run: bash ./scripts/stargz/02-serving.sh
+
+      - name: Install Kourier
+        run: bash ./scripts/stargz/02-kourier.sh
+
+      - name: Setup domain and autoscaler
+        run: |
+          INGRESS_HOST="127.0.0.1"
+          KNATIVE_DOMAIN=$INGRESS_HOST.sslip.io
+          kubectl patch configmap -n knative-serving config-domain -p "{\"data\": {\"$KNATIVE_DOMAIN\": \"\"}}"
+          kubectl patch configmap -n knative-serving config-autoscaler -p "{\"data\": {\"allow-zero-initial-scale\": \"true\"}}"
+
 
       - name: Setup stock-only node
-        run: sudo ./scripts/cloudlab/setup_node.sh stock-only use-stargz
+        run: ./scripts/cloudlab/setup_node.sh stock-only use-stargz
 
       - name: Check containerd service is running
         run: sudo screen -list | grep "containerd"
@@ -43,11 +70,10 @@ jobs:
         run: sudo systemctl is-active --quiet stargz-snapshotter
 
       - name: Setup single-node cluster
-        run: sudo ./scripts/cluster/create_one_node_cluster.sh stock-only
+        run: ./scripts/cluster/create_one_node_cluster.sh stock-only
 
       - name: Run test container with kn
-        run: sudo kn service apply stargz-test -f ./configs/knative_workloads/stargz-node.yaml --concurrency-target 1
+        run: kn service apply stargz-test -f ./configs/knative_workloads/stargz-node.yaml --concurrency-target 1
 
       - name: Curl container
-        run: curl http://stargz-test.default.192.168.1.240.sslip.io | grep "Hello World"
-
+        run: curl http://stargz-test.default.192.168.1.240.sslip.io | grep "Hello World"
```
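The "Setup domain and autoscaler" step builds its JSON patch payload inline from shell variables. The payload can be previewed locally before running it against a cluster; this minimal sketch only prints the JSON the `kubectl patch` call would send (no kubectl, no cluster):

```shell
# Preview the config-domain patch body the workflow step sends to kubectl.
# This only prints the JSON; no cluster is contacted.
INGRESS_HOST="127.0.0.1"
KNATIVE_DOMAIN=$INGRESS_HOST.sslip.io

# Same shape as the workflow's escaped string: the domain becomes a data key
# with an empty value, which Knative treats as the cluster-wide default domain.
printf '{"data": {"%s": ""}}\n' "$KNATIVE_DOMAIN"
# prints: {"data": {"127.0.0.1.sslip.io": ""}}
```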

docs/stargz/stargz_guide.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -5,6 +5,8 @@ This guide describes how to run stargz images on a single-cluster node using Kna
 ## Creating stargz images
 [eStargz](https://github.com/containerd/stargz-snapshotter/tree/cmd/v0.12.1) is a lazily-pullable image format developed to improve the performance of container boot-ups by making better usage of the layering structure of container images. The image format is compatible with [OCI](https://github.com/opencontainers/image-spec/)/[Docker](https://github.com/moby/moby/blob/master/image/spec/v1.2.md) images, so images can be pushed to standard container registries.
 
+Standard Docker images store their layers as individual tars, one per layer. This format does not allow locating an individual file without first downloading and decompressing the whole layer: the tar must be fetched, unpacked, and searched. The `stargz` layer format resolves this inefficiency by tarring each file on its own (large files are split into chunks, each tarred separately) and then combining these small archives into one big archive that still reads as a valid tar. With the aid of an index file at the end of the archive, a reader can seek directly to a specific file without scanning every file in the layer.
+
 To build stargz images, we recommend following the [stargz snapshotter and stargz store guide](https://github.com/containerd/stargz-snapshotter/blob/cmd/v0.12.1/docs/INSTALL.md) and building images using the [ctr-remote](https://github.com/containerd/stargz-snapshotter/tree/cmd/v0.12.1#creating-estargz-images-using-ctr-remote) CLI tool. We recommend serving images through DockerHub.
 
 ## Cluster setup for stargz
```
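The added paragraph above notes that concatenating per-file archives still yields one valid archive. The gzip half of that property is easy to demonstrate locally with standard tools (no stargz tooling involved):

```shell
# eStargz relies on gzip's multi-member property: independently compressed
# members, concatenated into one file, decompress as a single stream, which
# keeps every per-file (or per-chunk) member individually addressable.
printf 'hello ' | gzip >  members.gz   # first gzip member
printf 'world'  | gzip >> members.gz   # second member, appended
zcat members.gz                        # prints: hello world
rm members.gz
```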

scripts/stargz/01-kind.sh

Lines changed: 34 additions & 0 deletions

```bash
#!/usr/bin/env bash

set -eo pipefail

kindVersion=$(kind version)
# NOTE: this expansion reads the lowercase k8sVersion variable, so the
# workflow's K8S_VERSION env var is not picked up and the pinned default
# below is what actually gets used.
K8S_VERSION=${k8sVersion:-v1.23.4@sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9}
KIND_BASE=${KIND_BASE:-kindest/node}
CLUSTER_NAME=${KIND_CLUSTER_NAME:-knative}

echo "KinD version is ${kindVersion}"
if [[ ! $kindVersion =~ "${KIND_VERSION}." ]]; then
  echo "WARNING: Please make sure you are using KinD version ${KIND_VERSION}.x, download from https://github.com/kubernetes-sigs/kind/releases"
fi

KIND_EXIST="$(kind get clusters -q | grep ${CLUSTER_NAME} || true)"
if [[ ${KIND_EXIST} ]]; then
  echo "WARNING: Knative Cluster kind-${CLUSTER_NAME} already installed -> delete"
  kind delete cluster --name ${CLUSTER_NAME}
fi

echo "Using image ${KIND_BASE}:${K8S_VERSION}"
cat <<EOF | kind create cluster --name ${CLUSTER_NAME} --wait 120s --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: ${KIND_BASE}:${K8S_VERSION}
  extraPortMappings:
  - containerPort: 31080 # expose node port 31080 on host port 80, later used by the kourier or contour ingress
    listenAddress: 127.0.0.1
    hostPort: 80
EOF
```
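The `${VAR:-default}` expansions at the top of this script let callers override each value from the environment while keeping a pinned fallback. A quick local illustration of the pattern (plain shell, no kind needed):

```shell
# ${VAR:-default}: use the caller's value when the variable is set,
# otherwise fall back to the default on the right of ":-".
unset KIND_CLUSTER_NAME
echo "cluster: ${KIND_CLUSTER_NAME:-knative}"      # prints: cluster: knative

KIND_CLUSTER_NAME=my-cluster
echo "cluster: ${KIND_CLUSTER_NAME:-knative}"      # prints: cluster: my-cluster
```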

scripts/stargz/02-contour.sh

Lines changed: 53 additions & 0 deletions

```bash
#!/usr/bin/env bash

set -eo pipefail
set -u

KNATIVE_NET_CONTOUR_VERSION=${KNATIVE_NET_CONTOUR_VERSION:-1.4.0}

## INSTALL CONTOUR
n=0
until [ $n -ge 2 ]; do
  kubectl apply -f https://github.com/knative-sandbox/net-contour/releases/download/knative-v${KNATIVE_NET_CONTOUR_VERSION}/contour.yaml && break
  n=$((n+1))
  sleep 5
done
kubectl wait --for=condition=Established --all crd
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n contour-internal
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n contour-external

## INSTALL NET CONTOUR
n=0
until [ $n -ge 2 ]; do
  kubectl apply -f https://github.com/knative-sandbox/net-contour/releases/download/knative-v${KNATIVE_NET_CONTOUR_VERSION}/net-contour.yaml && break
  n=$((n+1))
  sleep 5
done
# the net-contour deployment lands in the knative-serving namespace
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n knative-serving

# Configure Knative to use this ingress
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: contour-ingress
  namespace: contour-external
  labels:
    networking.knative.dev/ingress-provider: contour
spec:
  type: NodePort
  selector:
    app: envoy
  ports:
    - name: http
      nodePort: 31080
      port: 80
      targetPort: 8080
EOF
```

scripts/stargz/02-kourier.sh

Lines changed: 43 additions & 0 deletions

```bash
#!/usr/bin/env bash

set -eo pipefail
set -u

KNATIVE_NET_KOURIER_VERSION=${KNATIVE_NET_KOURIER_VERSION:-1.4.0}

## INSTALL KOURIER
n=0
until [ $n -ge 3 ]; do
  kubectl apply -f https://github.com/knative-sandbox/net-kourier/releases/download/knative-v${KNATIVE_NET_KOURIER_VERSION}/kourier.yaml && break
  echo "Kourier failed to install, retrying"
  n=$((n+1))
  sleep 10
done
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n kourier-system
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n knative-serving

# Configure Knative to use this ingress
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kourier-ingress
  namespace: kourier-system
  labels:
    networking.knative.dev/ingress-provider: kourier
spec:
  type: NodePort
  selector:
    app: 3scale-kourier-gateway
  ports:
    - name: http2
      nodePort: 31080
      port: 80
      targetPort: 8080
EOF
```
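The same apply-until-success loop appears in `02-contour.sh`, `02-kourier.sh`, and `02-serving.sh`. A sketch of factoring it into a reusable helper; the `retry` function below is our own illustration, not part of the repository:

```shell
# Hypothetical retry helper: run a command up to MAX times, sleeping DELAY
# seconds between failed attempts; status 0 on success, 1 once exhausted.
retry() {
  local max=$1 delay=$2; shift 2
  local n=0
  until "$@"; do
    n=$((n+1))
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
  done
}

# Mirrors the scripts' pattern, e.g.:
#   retry 3 10 kubectl apply -f "https://github.com/knative-sandbox/net-kourier/releases/download/knative-v${KNATIVE_NET_KOURIER_VERSION}/kourier.yaml"
retry 3 0 true && echo "ok"   # prints: ok
```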

scripts/stargz/02-serving.sh

Lines changed: 33 additions & 0 deletions

```bash
#!/usr/bin/env bash

set -eo pipefail
set -u

KNATIVE_VERSION=${KNATIVE_VERSION:-1.4.0}

wget -q https://github.com/knative/client/releases/download/knative-v${KNATIVE_VERSION}/kn-linux-amd64
mv kn-linux-amd64 kn && chmod +x kn
mv kn /usr/local/bin

n=0
set +e
until [ $n -ge 2 ]; do
  kubectl apply -f https://github.com/knative/serving/releases/download/knative-v${KNATIVE_VERSION}/serving-crds.yaml && break
  echo "Serving CRDs failed to install, retrying"
  n=$((n+1))
  sleep 5
done
set -e
kubectl wait --for=condition=Established --all crd

n=0
set +e
until [ $n -ge 2 ]; do
  kubectl apply -f https://github.com/knative/serving/releases/download/knative-v${KNATIVE_VERSION}/serving-core.yaml && break
  echo "Serving Core failed to install, retrying"
  n=$((n+1))
  sleep 5
done
set -e
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n knative-serving
```
