
Commit b1b9aa7

Lakshmandhschall authored and committed
Added basic documentation for all new benchmarks.
Signed-off-by: L Lakshmanan <[email protected]>
1 parent dce2050 commit b1b9aa7

5 files changed: +324 -0 lines changed


benchmarks/compression/README.md

Lines changed: 63 additions & 0 deletions

# Compression Benchmark

The compression benchmark measures the performance of a serverless platform on the task of file compression. The benchmark uses the zlib library to compress and decompress input files. A specific input file can be specified using the `--def_file` flag; if none is given, the benchmark uses a default file.

The functionality is implemented in Python. The function is invoked using gRPC.
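
As a rough illustration of the work the function performs, the minimal Python sketch below compresses and decompresses a file with zlib. The file name is a placeholder standing in for the default file or the one passed via `--def_file`; this is not the benchmark's actual code.
```python
import zlib

# Hypothetical input file, standing in for the default file or --def_file.
with open("default.txt", "rb") as f:
    data = f.read()

compressed = zlib.compress(data, level=6)   # compress with a mid-range compression level
restored = zlib.decompress(compressed)      # decompress and verify the round trip

assert restored == data
print(f"{len(data)} bytes -> {len(compressed)} bytes")
```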

## Running this benchmark locally (using docker)

A detailed and general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the compression-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-compression-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl (a Python client sketch is shown after this step).
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
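
If you prefer a programmatic client over grpcurl, the sketch below shows roughly what such an invocation could look like in Python. It assumes the stubs generated from the benchmark's helloworld proto are available as `helloworld_pb2` and `helloworld_pb2_grpc`, and that the messages follow the standard helloworld `HelloRequest`/`HelloReply` shape; these names are assumptions, not taken from this repository.
```python
import grpc

# Generated stubs for the benchmark's helloworld proto; the module names here
# are an assumption for this sketch.
import helloworld_pb2
import helloworld_pb2_grpc

with grpc.insecure_channel("localhost:50000") as channel:
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    # Field names follow the standard gRPC helloworld HelloRequest/HelloReply messages.
    reply = stub.SayHello(helloworld_pb2.HelloRequest(name="Example text for Compression"))
    print(reply.message)
```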
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using knative)

A detailed and general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the compression-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Start the function with knative
```bash
kubectl apply -f ./yamls/knative/kn-compression-python.yaml
```
3. **Note the URL provided in the output. We refer to the part without the `http://` prefix as `$URL`; replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with the test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Compression"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/image-rotate/README.md

Lines changed: 66 additions & 0 deletions

# Image Rotate Benchmark

The image rotate benchmark rotates an input image by 90 degrees. An input image can be specified; if none is given, a default image is used. This benchmark also depends on a database, backed by MongoDB, that stores the images available for use.

The `init-database.go` script runs when the function starts and populates the database with the images from the `images` folder.

The functionality is implemented in two runtimes, namely Go and Python. The function is invoked using gRPC.
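
As a rough illustration of the core operation, the minimal Python sketch below rotates an image by 90 degrees with Pillow. The file names are placeholders; in the benchmark itself the image would come from the MongoDB-backed image store, and this is not the benchmark's actual code.
```python
from PIL import Image

# Hypothetical file names, used only for this sketch.
with Image.open("default.jpg") as img:
    rotated = img.rotate(90, expand=True)  # rotate by 90 degrees, growing the canvas if needed
    rotated.save("default-rotated.jpg")
```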

## Running this benchmark locally (using docker)

A detailed and general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the image-rotate-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-image-rotate-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using knative)

A detailed and general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the image-rotate-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with knative
```bash
kubectl apply -f ./yamls/knative/image-rotate-database.yaml
kubectl apply -f ./yamls/knative/kn-image-rotate-python.yaml
```
3. **Note the URL provided in the output. We refer to the part without the `http://` prefix as `$URL`; replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with the test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Image-rotate"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.

benchmarks/rnn-serving/README.md

Lines changed: 63 additions & 0 deletions

# RNN Serving Benchmark

The RNN serving benchmark generates a string in a specified language using an RNN model. A language can be specified as input; if none is given, a default language is chosen, either at random or uniquely via the input generator.

The functionality is implemented in Python. The function is invoked using gRPC.
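
As a rough illustration only, the sketch below samples characters from a small character-level RNN conditioned on a language, which is the general shape of what this benchmark does. The model architecture, vocabulary, language list, and (random) weights are all placeholders, not the benchmark's pre-trained model or code.
```python
import string

import torch
import torch.nn as nn

ALL_LETTERS = string.ascii_letters + " .,;'-"
N_LETTERS = len(ALL_LETTERS) + 1              # extra slot for an end-of-string token
LANGUAGES = ["English", "German", "Spanish"]  # hypothetical language set


class CharRNN(nn.Module):
    """GRU mapping (language one-hot + current-letter one-hot) to next-letter logits."""

    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.gru = nn.GRU(len(LANGUAGES) + N_LETTERS, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, N_LETTERS)

    def forward(self, x, hidden=None):
        y, hidden = self.gru(x, hidden)
        return self.out(y[:, -1]), hidden


def sample(model, language, start_letter="A", max_len=20):
    """Greedily sample one character at a time until the end-of-string token."""
    lang_vec = torch.zeros(len(LANGUAGES))
    lang_vec[LANGUAGES.index(language)] = 1.0
    result, letter, hidden = start_letter, start_letter, None
    with torch.no_grad():
        for _ in range(max_len):
            letter_vec = torch.zeros(N_LETTERS)
            letter_vec[ALL_LETTERS.index(letter)] = 1.0
            logits, hidden = model(torch.cat([lang_vec, letter_vec]).view(1, 1, -1), hidden)
            idx = int(logits.argmax(dim=1))
            if idx == N_LETTERS - 1:          # end-of-string token reached
                break
            letter = ALL_LETTERS[idx]
            result += letter
    return result


print(sample(CharRNN(), "German"))            # gibberish here, since the weights are random
```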

## Running this benchmark locally (using docker)

A detailed and general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the rnn-serving-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-rnn-serving-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using knative)

A detailed and general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the rnn-serving-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Start the function with knative
```bash
kubectl apply -f ./yamls/knative/kn-rnn-serving-python.yaml
```
3. **Note the URL provided in the output. We refer to the part without the `http://` prefix as `$URL`; replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with the test-client.
```bash
./test-client --addr $URL:80 --name "Example text for rnn-serving"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.
benchmarks/video-analytics-standalone/README.md

Lines changed: 66 additions & 0 deletions

# Video Analytics Standalone Benchmark

The video analytics standalone benchmark preprocesses an input video and runs an object detection model (SqueezeNet) on it. An input video can be specified; if none is given, a default video is used. This benchmark also depends on a database, backed by MongoDB, that stores the videos available for use.

The `init-database.go` script runs when the function starts and populates the database with the videos from the `videos` folder.

The functionality is implemented in Python. The function is invoked using gRPC.
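
As a rough illustration of the kind of work involved, the sketch below decodes a video with OpenCV and classifies sampled frames with a SqueezeNet model from torchvision. The input path, the frame-sampling rate, and the use of torchvision's pre-trained weights are assumptions for this sketch, not the benchmark's exact pipeline.
```python
import cv2
import torch
from torchvision import models, transforms

# Standard ImageNet preprocessing for the sampled frames.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).eval()

cap = cv2.VideoCapture("default.mp4")     # hypothetical input video
predictions = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:               # classify roughly one frame per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(rgb).unsqueeze(0))
        predictions.append(int(logits.argmax(dim=1)))
    frame_idx += 1
cap.release()
print(predictions)                        # ImageNet class indices per sampled frame
```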

## Running this benchmark locally (using docker)

A detailed and general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the video-analytics-standalone-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-video-analytics-standalone-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using knative)

A detailed and general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the video-analytics-standalone-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with knative
```bash
kubectl apply -f ./yamls/knative/video-analytics-standalone-database.yaml
kubectl apply -f ./yamls/knative/kn-video-analytics-standalone-python.yaml
```
3. **Note the URL provided in the output. We refer to the part without the `http://` prefix as `$URL`; replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with the test-client.
```bash
./test-client --addr $URL:80 --name "Example text for video analytics standalone"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.
benchmarks/video-processing/README.md

Lines changed: 66 additions & 0 deletions

# Video Processing Benchmark

The video processing benchmark converts an input video to grayscale. An input video can be specified; if none is given, a default video is used. This benchmark also depends on a database, backed by MongoDB, that stores the videos available for use.

The `init-database.go` script runs when the function starts and populates the database with the videos from the `videos` folder.

The functionality is implemented in Python. The function is invoked using gRPC.
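
As a rough illustration of the core transformation, the minimal sketch below reads a video frame by frame with OpenCV, converts each frame to grayscale, and writes the result to a new file. The file names are placeholders; this is not the benchmark's actual code.
```python
import cv2

cap = cv2.VideoCapture("input.mp4")       # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output-gray.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width, height), isColor=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single-channel grayscale frame
    out.write(gray)

cap.release()
out.release()
```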

## Running this benchmark locally (using docker)

A detailed and general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the video-processing-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
### Invoke once
2. Start the function with docker-compose
```bash
docker-compose -f ./yamls/docker-compose/dc-video-processing-python.yaml up
```
3. In a new terminal, invoke the interface function with grpcurl.
```bash
./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
```
### Invoke multiple times
2. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "localhost" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 50000 -dbg -time 10 -rps 1
```

## Running this benchmark (using knative)

A detailed and general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the video-processing-python function.
1. Build or pull the function images using `make all-image` or `make pull`.
2. Initialise the database and start the function with knative
```bash
kubectl apply -f ./yamls/knative/video-processing-database.yaml
kubectl apply -f ./yamls/knative/kn-video-processing-python.yaml
```
3. **Note the URL provided in the output. We refer to the part without the `http://` prefix as `$URL`; replace any instance of `$URL` in the commands below with it.**
### Invoke once
4. In a new terminal, invoke the interface function with the test-client.
```bash
./test-client --addr $URL:80 --name "Example text for Video-processing"
```
### Invoke multiple times
4. Run the invoker
```bash
# build the invoker binary
cd ../../tools/invoker
make invoker

# Specify the hostname through "endpoints.json"
echo '[ { "hostname": "$URL" } ]' > endpoints.json

# Start the invoker with a chosen RPS rate and time
./invoker -port 80 -dbg -time 10 -rps 1
```

## Tracing

This benchmark does not currently support distributed tracing.
