Commit 273edd5

Updating latest tags to v0.6.0
1 parent 1ecb7a2 commit 273edd5

File tree

4 files changed: +30 −29 lines

Makefile

Lines changed: 1 addition & 1 deletion

```diff
@@ -4,7 +4,7 @@ IMAGE_NAMES = perf nginx ttfr tool
 IMAGES_PATH = images
 REGISTRY?=quay.io
 ORGANISATION?=tigeradev
-VERSION?=latest
+VERSION?=v0.6.0
 E2E_CLUSTER_NAME?=tb-e2e
 
 .PHONY: all build test clean tool test-tool e2e-test clean-ttfr clean-e2e
```
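The `?=` operator makes `v0.6.0` an overridable default rather than a hard pin: a command-line assignment (or an exported environment variable) still wins. A minimal sketch of that behaviour, using a throwaway makefile rather than the repo's own:

```shell
# Sketch: GNU make's ?= assigns only when the variable is not already set,
# so VERSION?=v0.6.0 is an overridable default, not a hard pin.
# /tmp/version-demo.mk is a throwaway file, not part of the repo;
# v0.7.0-dev is a made-up override tag.
printf 'VERSION?=v0.6.0\nprint-version:\n\t@echo $(VERSION)\n' > /tmp/version-demo.mk

default_tag=$(make -s -f /tmp/version-demo.mk print-version)
override_tag=$(make -s -f /tmp/version-demo.mk print-version VERSION=v0.7.0-dev)

echo "$default_tag"
echo "$override_tag"
```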

README.md

Lines changed: 24 additions & 23 deletions

````diff
@@ -12,9 +12,9 @@ It also provides a framework for us to extend later with other tests.
 `docker pull <image>`, `docker tag <private-image-name>`, `docker push <private-image-name>`
 
 The images are:
-`quay.io/tigeradev/tiger-bench-nginx:latest`
-`quay.io/tigeradev/tiger-bench-perf:latest`
-`quay.io/tigeradev/tiger-bench:latest` - this is the tool itself.
+`quay.io/tigeradev/tiger-bench-nginx:v0.6.0`
+`quay.io/tigeradev/tiger-bench-perf:v0.6.0`
+`quay.io/tigeradev/tiger-bench:v0.6.0` - this is the tool itself.
 
 1. Create a `testconfig.yaml` file, containing a list of test definitions you'd like to run (see example provided)
 1. Run the tool, substituting the image names in the command below if needed, and modifying the test parameters if desired:
@@ -28,9 +28,9 @@ It also provides a framework for us to extend later with other tests.
 -e AWS_ACCESS_KEY_ID \
 -e AWS_SESSION_TOKEN \
 -e LOG_LEVEL=INFO \
--e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:latest" \
--e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:latest" \
-quay.io/tigeradev/tiger-bench:latest
+-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:v0.6.0" \
+-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:v0.6.0" \
+quay.io/tigeradev/tiger-bench:v0.6.0
 ```
 1. See results in the `results.json` file in your local directory!
 
@@ -63,9 +63,9 @@ docker run --rm --net=host \
 -v $HOME/.aws:/root/.aws \
 -e AWS_SECRET_ACCESS_KEY \
 -e AWS_ACCESS_KEY_ID \
--e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:latest" \
--e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:latest" \
-quay.io/tigeradev/tiger-bench:latest
+-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:v0.6.0" \
+-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:v0.6.0" \
+quay.io/tigeradev/tiger-bench:v0.6.0
 ```
 
 The tool runs in the hosts network namespace to ensure it has the same access as a user running kubectl on the host.
@@ -128,7 +128,7 @@ There are 2 tests requested in this example config.
 
 `testKind` is required - at present you can only ask for `"thruput-latency"` or `ttfr`
 
-`numPolicies`, `numIdlePolicies`, `numServices`, `numPods` specify the standing config desired for this test. Standing config exists simply to "load" the cluster up with config. The number that you can create is limited by your cluster - e.g. you cannot create more standing pods than will fit on your cluster! `numPolicies` creates policies that apply to the test pods. `numIdlePolicies` creates policies that will NOT apply to the test pods.
+`numPolicies`, `numIdlePolicies`, `numServices`, `numPods` specify the standing config desired for this test. Standing config exists simply to "load" the cluster up with config. The number that you can create is limited by your cluster - e.g. you cannot create more standing pods than will fit on your cluster! `numPolicies` creates policies that apply to the test pods. `numIdlePolicies` creates policies that will NOT apply to the test pods.
 
 `leaveStandingConfig` tells the tool whether it should leave or clean up the standing resources it created for this test. It is sometimes useful to leave standing config up between tests, especially if it takes a long time to set up.
 
@@ -151,11 +151,11 @@ external: false
 `direct` is a boolean, which determines whether the test should run a direct pod-to-pod test.
 `service` is a boolean, which determines whether the test should run a pod-to-service-to-pod test.
 `external` is a boolean, which determines whether the test should run a test from whereever this test is being run to an externally exposed service.
-If `external=true`, you must also supply `ExternalIPOrFQDN`, `TestPort` and `ControlPort` (for a thruput-latency test) to tell the test the IP and ports it should connect to. The ExternalIPOrFQDN will be whatever is exposed to the world, and might be a LoadBalancer IP, or a node IP, or something else, depending on how you exposed the service. The Test and Control ports need to be the same as used on the test server pod (because the test tools were not designed to work in an environment with NAT).
+If `external=true`, you must also supply `ExternalIPOrFQDN`, `TestPort` and `ControlPort` (for a thruput-latency test) to tell the test the IP and ports it should connect to. The ExternalIPOrFQDN will be whatever is exposed to the world, and might be a LoadBalancer IP, or a node IP, or something else, depending on how you exposed the service. The Test and Control ports need to be the same as used on the test server pod (because the test tools were not designed to work in an environment with NAT).
 
-Note that the tool will NOT expose the services for you, because there are too many different ways to expose services to the world. You will need to expose pods with the label `app: qperf` in the test namespace to the world for this test to work. An example of exposing these pods using NodePorts can be found in `external_service_example.yaml`. If you wanted to change that to use a LoadBalancer, simply change `type: NodePort` to `type: LoadBalancer`.
+Note that the tool will NOT expose the services for you, because there are too many different ways to expose services to the world. You will need to expose pods with the label `app: qperf` in the test namespace to the world for this test to work. An example of exposing these pods using NodePorts can be found in `external_service_example.yaml`. If you wanted to change that to use a LoadBalancer, simply change `type: NodePort` to `type: LoadBalancer`.
 
-For `thruput-latency` tests, you will need to expose 2 ports from those pods: A TCP `TestPort` and a `ControlPort`. You must not map the port numbers between the pod and the external service, but they do NOT need to be consecutive. i.e. if you specify TestPort=32221, the pod will listen on port 32221 and whatever method you use to expose that service to the outside world must also use that port number.
+For `thruput-latency` tests, you will need to expose 2 ports from those pods: A TCP `TestPort` and a `ControlPort`. You must not map the port numbers between the pod and the external service, but they do NOT need to be consecutive. i.e. if you specify TestPort=32221, the pod will listen on port 32221 and whatever method you use to expose that service to the outside world must also use that port number.
 
@@ -164,11 +164,12 @@ A `ttfr` test may have the following additional config:
 TestPodsPerNode: 80
 Rate: 2.5
 ```
+
 The `TestPodsPerNode` setting controls the number of pods it will try to set up on each test node
 
-The `Rate` is the rate at which it will send requests to set up pods, in pods per second. Note that the acheivable rate depends on a number of things, including the TestPodsPerNode setting (since it cannot set up more than TestPodsPerNode multiplied by the number of nodes with the test label, the tool will stall if all the permitted pods are in the process of starting or terminating). And that will depend on the speed of the kubernetes control plane, kubelet, etc.
+The `Rate` is the rate at which it will send requests to set up pods, in pods per second. Note that the acheivable rate depends on a number of things, including the TestPodsPerNode setting (since it cannot set up more than TestPodsPerNode multiplied by the number of nodes with the test label, the tool will stall if all the permitted pods are in the process of starting or terminating). And that will depend on the speed of the kubernetes control plane, kubelet, etc.
 
-In the event that you ask for a rate higher than the tool can acheive, it will run at the maximum rate it can, while logging warnings that it is "unable to keep up with rate". If the problem is running out of pod slots, it will log that also, and you can fix it by either increasing the pods per node or giving more nodes the test label.
+In the event that you ask for a rate higher than the tool can acheive, it will run at the maximum rate it can, while logging warnings that it is "unable to keep up with rate". If the problem is running out of pod slots, it will log that also, and you can fix it by either increasing the pods per node or giving more nodes the test label.
 
 ### Settings which can reconfigure your cluster
 
@@ -335,12 +336,11 @@ An example result from a "thruput-latency" test might look like:
 `ClusterDetails` contains information collected about the cluster at the time of the test.
 `thruput-latency` contains a statistical summary of the raw qperf results - latency and throughput for a direct pod-pod test and via a service. Units are given in the result.
 
-
 ### The "Time To First Response" test
 
-This "time to first response" (TTFR) test spins up a server pod on each node in the cluster, and then spins up client pods on each node in the cluster. The client pods start and send requests to the server pod, and record the amount of time it takes before they get a response. This is sometimes[1] a useful proxy for how long its taking for Calico to program the rules for that pod (since pods start with a deny-all rule and calico-node must program the correct rules before it can talk to anything). A better measure of the time it takes Calico to program rules for pods is to look in the [Felix Prometheus metrics](https://docs.tigera.io/calico/latest/reference/felix/prometheus#common-data-plane-metrics) at the `felix_int_dataplane_apply_time_seconds` statistic.
+This "time to first response" (TTFR) test spins up a server pod on each node in the cluster, and then spins up client pods on each node in the cluster. The client pods start and send requests to the server pod, and record the amount of time it takes before they get a response. This is sometimes[1] a useful proxy for how long its taking for Calico to program the rules for that pod (since pods start with a deny-all rule and calico-node must program the correct rules before it can talk to anything). A better measure of the time it takes Calico to program rules for pods is to look in the [Felix Prometheus metrics](https://docs.tigera.io/calico/latest/reference/felix/prometheus#common-data-plane-metrics) at the `felix_int_dataplane_apply_time_seconds` statistic.
 
-[1] if `linuxPolicySetupTimeoutSeconds` is set in the CalicoNetworkSpec in the Installation resource, then pod startup will be delayed until policy is applied. This can be handy if your application pod wants its first request to always succeed. This is a Calico-specific feature that is not part of the CNI spec. See the [Calico documentation](https://docs.tigera.io/calico/latest/reference/configure-cni-plugins#enabling-policy-setup-timeout) for more information on this feature and how to enable it.
+[1] if `linuxPolicySetupTimeoutSeconds` is set in the CalicoNetworkSpec in the Installation resource, then pod startup will be delayed until policy is applied. This can be handy if your application pod wants its first request to always succeed. This is a Calico-specific feature that is not part of the CNI spec. See the [Calico documentation](https://docs.tigera.io/calico/latest/reference/configure-cni-plugins#enabling-policy-setup-timeout) for more information on this feature and how to enable it.
 
 For a "ttfr" test, the tool will:
 
@@ -350,18 +350,19 @@ For a "ttfr" test, the tool will:
 - Wait for those to come up.
 - Create a server pod on each node with the `tigera.io/test-nodepool=default-pool` label
 - Loop round:
-  - creating test pods on those nodes, at the rate defined by Rate in the test config
-  - test pods are then checked until they produce a ttfr result in their log, which is read by the tool
-  - and a delete is sent for the test pod.
+  - creating test pods on those nodes, at the rate defined by Rate in the test config
+  - test pods are then checked until they produce a ttfr result in their log, which is read by the tool
+  - and a delete is sent for the test pod.
 - ttfr results are recorded
 - Collate results and compute min/max/average/50/75/90/99th percentiles
 - Output that summary into a JSON format results file.
 - Optionally delete the test namespace (which will cause all test resources within it to be deleted)
 - Wait for everything to finish being cleaned up.
 
-This test measures Time to First Response in seconds. i.e. the time between a pod starting up, and it getting a response from a server pod on the same node.
+This test measures Time to First Response in seconds. i.e. the time between a pod starting up, and it getting a response from a server pod on the same node.
 
 An example result from a "ttfr" test might look like:
+
 ```
 [
 {
@@ -425,4 +426,4 @@ An example result from a "ttfr" test might look like:
 
 `config` contains the configuration requested in the test definition.
 `ClusterDetails` contains information collected about the cluster at the time of the test.
-`ttfr` contains a statistical summary of the raw results. Units are given in the result.
+`ttfr` contains a statistical summary of the raw results. Units are given in the result.
````
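For the external thruput-latency case the README describes, a NodePort Service might look like the sketch below. This is an assumption-laden illustration, not the repo's `external_service_example.yaml`: the namespace name and the 32222 control port are placeholders, and the point it shows is that `port`, `targetPort` and `nodePort` stay identical because the test tools do not tolerate port mapping:

```yaml
# Hypothetical sketch only - see external_service_example.yaml in the repo
# for the real example. Namespace and ControlPort number are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: qperf-external
  namespace: test-namespace   # placeholder: use your configured test namespace
spec:
  type: NodePort
  selector:
    app: qperf                # the label on the test server pods
  ports:
    - name: test
      port: 32221             # TestPort: same number at every layer (no NAT)
      targetPort: 32221
      nodePort: 32221
    - name: control
      port: 32222             # ControlPort placeholder: same no-mapping rule
      targetPort: 32222
      nodePort: 32222
```

Switching `type: NodePort` to `type: LoadBalancer` is the one-line change the README mentions; the port-equality constraint still applies.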

pkg/config/config.go

Lines changed: 3 additions & 3 deletions

```diff
@@ -58,9 +58,9 @@ type Config struct {
 	ProxyAddress string `envconfig:"HTTP_PROXY" default:""`
 	TestConfigFile string `envconfig:"TESTCONFIGFILE" required:"true"`
 	LogLevel string `envconfig:"LOG_LEVEL" default:"info"`
-	WebServerImage string `envconfig:"WEBSERVER_IMAGE" default:"quay.io/tigeradev/tiger-bench-nginx:latest"`
-	PerfImage string `envconfig:"PERF_IMAGE" default:"quay.io/tigeradev/tiger-bench-perf:latest"`
-	TTFRImage string `envconfig:"TTFR_IMAGE" default:"quay.io/tigeradev/tiger-bench-ttfr:latest"`
+	WebServerImage string `envconfig:"WEBSERVER_IMAGE" default:"quay.io/tigeradev/tiger-bench-nginx:v0.6.0"`
+	PerfImage string `envconfig:"PERF_IMAGE" default:"quay.io/tigeradev/tiger-bench-perf:v0.6.0"`
+	TTFRImage string `envconfig:"TTFR_IMAGE" default:"quay.io/tigeradev/tiger-bench-ttfr:v0.6.0"`
 	TestConfigs testConfigs
 }
```
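The `default:` struct tags above mean each image reference falls back to the pinned v0.6.0 tag unless the matching environment variable is set (which is why the `docker run` examples pass `-e WEBSERVER_IMAGE=...`). The same precedence can be sketched with plain shell parameter expansion; the `:dev` tag below is a made-up override, not a published image:

```shell
# Mirrors the envconfig-style default: the environment wins when the variable
# is set, otherwise the pinned v0.6.0 tag is used.
unset WEBSERVER_IMAGE
default_img="${WEBSERVER_IMAGE:-quay.io/tigeradev/tiger-bench-nginx:v0.6.0}"

WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:dev"   # made-up override
override_img="${WEBSERVER_IMAGE:-quay.io/tigeradev/tiger-bench-nginx:v0.6.0}"

echo "$default_img"
echo "$override_img"
```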

run.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 #!/bin/bash
 set -ex
-docker build -t quay.io/tigeradev/tiger-bench:latest .
+docker build -t quay.io/tigeradev/tiger-bench:v0.6.0 .
 docker run --rm --net=host \
 -v "${PWD}":/results \
 -v ${KUBECONFIG}:/kubeconfig \
@@ -12,4 +12,4 @@ docker run --rm --net=host \
 -e LOG_LEVEL=INFO \
 -e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:main" \
 -e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:main" \
-quay.io/tigeradev/tiger-bench:latest
+quay.io/tigeradev/tiger-bench:v0.6.0
```
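The tag now appears in both the `docker build` and `docker run` lines of run.sh, so a future bump must touch both. One hedged refactoring idea (not part of this commit): derive the image reference from a single overridable value so the two lines cannot drift. A tiny sketch, with `v0.7.0` as a made-up future tag:

```shell
# Sketch: build the image reference from one overridable tag value, so the
# build and run lines always refer to the same version. Not in the repo.
image_ref() {
  printf 'quay.io/tigeradev/tiger-bench:%s' "${1:-v0.6.0}"
}

pinned=$(image_ref)          # defaults to this release's tag
next=$(image_ref v0.7.0)     # made-up future tag for illustration

echo "$pinned"
echo "$next"
```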
