This repository contains all the files required to run the tests comparing Istio, Linkerd, Cilium, Kuma, and Ambient Mesh.
To measure the resource usage of those meshes, the benchmark relies on:
- the OpenTelemetry Demo
- hipster-shop
All the observability data generated by the environment is sent to Dynatrace.
The repo contains a load test script against the OpenTelemetry demo. The load test takes 2 hours, so always wait 2 hours after launching opentelemetry/loadtest_job.yaml.
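Instead of watching the clock, `kubectl wait` can block until the Job finishes. A minimal sketch, assuming the Job created by opentelemetry/loadtest_job.yaml is named `loadtest` (check the actual name with `kubectl get jobs -n otel-demo`):

```shell
#!/bin/sh
# Sketch: launch the load test and block until the Job completes.
# ASSUMPTION: the Job name is "loadtest"; adjust to match the manifest.
run_loadtest() {
  kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
  # The test runs for 2 hours; allow some headroom before timing out.
  kubectl wait --for=condition=complete job/loadtest -n otel-demo --timeout=150m
}
```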
The following tools need to be installed on your machine:
- jq
- kubectl
- git
- AWS CLI
- eksctl
- Helm
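Before starting, it can save time to confirm all of these are on the PATH. A small sketch (the helper name `check_tools` is our own, not part of the repo):

```shell
#!/bin/sh
# Sketch: report any prerequisite CLIs that are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  [ -z "$missing" ] && echo "all found" || echo "missing:$missing"
}

check_tools jq kubectl git aws eksctl helm
```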
First, run the following commands:
ZONE=eu-west-3
NAME=isitobservable-smbench
OWNER=henrik.rexed
For all the tests that do not use Cilium (Istio, Ambient Mesh, Traefik Mesh, Kuma, Linkerd):
sed -i '' "s,CLUSTER_NAME_TO_REPLACE,$NAME," cluster/cluster.yaml
sed -i '' "s,REGION_TO_REPLACE,$ZONE," cluster/cluster.yaml
sed -i '' "s,OWNER_TO_REPLACE,$OWNER," cluster/cluster.yaml
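Note that `sed -i ''` is the BSD/macOS form; GNU sed on Linux expects `-i` with no empty argument. A portable sketch that avoids the in-place flag entirely (the `substitute` helper is our own):

```shell
#!/bin/sh
# Portable placeholder substitution: write to a temp file, then move it
# into place. Works with both GNU and BSD sed, unlike `sed -i ''`.
substitute() {   # substitute <file> <placeholder> <value>
  # Note: the value must not contain a comma, the sed delimiter used here.
  sed "s,$2,$3," "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Same effect as the commands above, e.g.:
# substitute cluster/cluster.yaml CLUSTER_NAME_TO_REPLACE "$NAME"
```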
eksctl create cluster -f cluster/cluster.yaml
For the tests using Cilium:
sed -i '' "s,CLUSTER_NAME_TO_REPLACE,$NAME," cluster/cluster_cilium.yaml
sed -i '' "s,REGION_TO_REPLACE,$ZONE," cluster/cluster_cilium.yaml
sed -i '' "s,OWNER_TO_REPLACE,$OWNER," cluster/cluster_cilium.yaml
eksctl create cluster -f cluster/cluster_cilium.yaml
Clone the repository:
git clone https://github.com/isitobservable/servicemeshsecuritybenchmark
cd servicemeshsecuritybenchmark
If you don't have a Dynatrace tenant, you can create a trial using the following link: Dynatrace Trial
Once you have your tenant, save the Dynatrace tenant URL in the variable DT_TENANT_URL (for example: https://dedededfrf.live.dynatrace.com):
DT_TENANT_URL=<YOUR TENANT URL>
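A quick sanity check on the URL can catch typos before the deployment scripts consume it. A sketch, assuming a SaaS tenant of the shape shown in the example above (adjust the pattern for managed tenants):

```shell
#!/bin/sh
# Sketch: fail fast on an obviously malformed tenant URL.
# ASSUMPTION: SaaS tenants of the form https://<id>.live.dynatrace.com.
check_tenant_url() {
  case "$1" in
    https://*.live.dynatrace.com) return 0 ;;
    *) echo "unexpected DT_TENANT_URL format: $1" >&2; return 1 ;;
  esac
}

# Usage: check_tenant_url "$DT_TENANT_URL" || exit 1
```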
The Dynatrace Operator requires several tokens:
- a token to deploy and configure the various components
- a token to ingest metrics and traces
Create one token for the operator with the following scopes:
- Create ActiveGate tokens
- Read entities
- Read Settings
- Write Settings
- Access problem and event feed, metrics and topology
- Read configuration
- Write configuration
- Paas integration - installer downloader
Save the value of the token. We will use it later to store it in a Kubernetes secret.
API_TOKEN=<YOUR TOKEN VALUE>
Create a Dynatrace token with the following scopes:
- Ingest metrics (metrics.ingest)
- Ingest logs (logs.ingest)
- Ingest events (events.ingest)
- Ingest OpenTelemetry
- Read metrics
DATA_INGEST_TOKEN=<YOUR TOKEN VALUE>
The deployment script will deploy the entire environment:
chmod 777 deployment.sh
TYPE=none
./deployment.sh --clustername "${NAME}" --dturl "${DT_TENANT_URL}" --dtingesttoken "${DATA_INGEST_TOKEN}" --dtoperatortoken "${API_TOKEN}" --type "${TYPE}"
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
To switch the environment to Kuma, remove the load test and run the update script:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=kuma
chmod 777 update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Deploy the policies:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl apply -f linkerd/server_oteldemo.yaml
kubectl apply -f kuma/meshtimeout.yaml
kubectl apply -f kuma/MeshRatelimt.yaml
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Install the Linkerd CLI:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh
export PATH=$HOME/.linkerd2/bin:$PATH
To switch the environment to Linkerd, remove the load test and run the update script:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=linkerd
chmod 777 update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
kubectl create secret generic dynatrace --from-literal=dynatrace_oltp_url="$DT_TENANT_URL" --from-literal=clustername="$NAME" --from-literal=dt_api_token="$DATA_INGEST_TOKEN" -n linkerd-jaeger
kubectl apply -f linkerd/linkerd-collector.yaml
kubectl apply -f linkerd/collector_deployment.yaml
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Deploy the policies:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl apply -f linkerd/server_oteldemo.yaml
kubectl apply -f linkerd/requestTimeout.yaml
kubectl apply -f linkerd/ratelimit.yaml
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
To switch the environment to Istio, remove the load tests and run the update script:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl delete -f hipstershop/loadtest_job.yaml -n hipster-shop
OLD=$TYPE
TYPE=istio
chmod 777 update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Now let's run the same test with policies:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl apply -f istio/request_timeout.yaml
kubectl apply -f istio/rate_limit.yaml
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
To switch the environment to Ambient Mesh, remove the load tests and run the update script:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl delete -f hipstershop/loadtest_job.yaml -n hipster-shop
OLD=$TYPE
TYPE=ambient
chmod 777 update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
Wait 45 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Cilium requires a specific cluster, so first delete the previous cluster and create a new one:
sed -i '' "s,CLUSTER_NAME_TO_REPLACE,$NAME," cluster/cluster_cilium.yaml
sed -i '' "s,REGION_TO_REPLACE,$ZONE," cluster/cluster_cilium.yaml
sed -i '' "s,OWNER_TO_REPLACE,$OWNER," cluster/cluster_cilium.yaml
eksctl delete cluster -f cluster/cluster.yaml
eksctl create cluster -f cluster/cluster_cilium.yaml
First, deploy Cilium so that all the networking components are ready:
helm install cilium cilium/cilium --version 1.17.1 \
--namespace kube-system \
--set eni.enabled=true \
--set ipam.mode=eni \
--set egressMasqueradeInterfaces=ens+ \
--set routingMode=native \
--set ingressController.enabled=true \
--set ingressController.loadbalancerMode=dedicated \
--set kubeProxyReplacement=true \
--set gatewayAPI.enableAppProtocol=true \
--set envoyConfig.enabled=true \
--set gatewayAPI.enabled=true \
--set prometheus.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set authentication.mutual.spire.enabled=true \
--set authentication.mutual.spire.install.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.enabled=true \
--set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
Once the cluster is up and running, deploy the environment:
chmod 777 deployment.sh
TYPE=cilium
./deployment.sh --clustername "${NAME}" --dturl "${DT_TENANT_URL}" --dtingesttoken "${DATA_INGEST_TOKEN}" --dtoperatortoken "${API_TOKEN}" --type "${TYPE}"
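The long `--set` list in the Cilium install above can also be kept in a values file, which is easier to review and diff. A sketch that writes an equivalent `cilium-values.yaml` (same settings as the flags, only reformatted; verify the keys against your Cilium chart version):

```shell
#!/bin/sh
# Sketch: generate a values file equivalent to the --set flags above,
# then pass it to Helm with -f instead of the long flag list.
cat > cilium-values.yaml <<'EOF'
eni:
  enabled: true
ipam:
  mode: eni
egressMasqueradeInterfaces: ens+
routingMode: native
kubeProxyReplacement: true
ingressController:
  enabled: true
  loadbalancerMode: dedicated
gatewayAPI:
  enabled: true
  enableAppProtocol: true
envoyConfig:
  enabled: true
prometheus:
  enabled: true
operator:
  prometheus:
    enabled: true
authentication:
  mutual:
    spire:
      enabled: true
      install:
        enabled: true
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
  metrics:
    enableOpenMetrics: true
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - "httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction"
EOF

# helm install cilium cilium/cilium --version 1.17.1 \
#   --namespace kube-system -f cilium-values.yaml
```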




