Porting a Client Server Application
Here we will demonstrate how to port a client/server application to Loupe. Before following this guide it is probably a good idea to understand how to port a run-to-completion application first. This guide assumes you have downloaded and set up loupe/loupedb as described here.
We will use the Valkey key-value store as an example application. It is a server, so we also need a client workload to stress test it: for that we will use valkey-benchmark.
valkey.dockerfile looks like this:
FROM loupe-base:latest
# Install dependencies for building valkey from sources
RUN apt update
RUN apt install -y build-essential git pkg-config
# Clone valkey's repo:
RUN git clone https://github.com/valkey-io/valkey.git
# Build valkey
RUN cd valkey && make -j
# Copy the test script in the container
COPY dockerfile_data/valkey-test.py /root/valkey-test.py
RUN chmod a+x /root/valkey-test.py
# And finally the invocation of Loupe within the container it will run in:
CMD /root/explore.py --output-csv -t /root/valkey-test.py --timeout 30 \
    -b /root/valkey/src/valkey-server --disable-static

It is quite similar to the one used for the run-to-completion test: the image is created from loupe-base, and after installing a few dependencies we clone Valkey's repository and build it from sources.
We then copy the test script (detailed below) within the image.
The invocation of explore.py is slightly different from the one used for run-to-completion applications: the --test-sequential flag is not present, indicating that we are working in client/server mode.
This changes the way Loupe manages the application under test (/root/valkey/src/valkey-server): for each test, after the server is spawned, it is kept alive while the test script (executing the client workload) runs.
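Conceptually, one test iteration in client/server mode can be pictured as the following loop. This is a simplified sketch for illustration only: the function name, the fixed readiness wait, and the teardown logic are our own assumptions, not Loupe's actual implementation.

```python
import subprocess
import time

def run_client_server_test(server_cmd, test_script, timeout=30):
    """Spawn the server, run the client test script against it,
    then tear the server down. Returns True if the test passed.
    (Hypothetical sketch, not Loupe's real code.)"""
    server = subprocess.Popen(server_cmd)  # server is kept alive during the test
    try:
        time.sleep(1)  # crude stand-in for waiting until the server is ready
        result = subprocess.run([test_script], timeout=timeout)
        return result.returncode == 0  # exit code 0 means the workload succeeded
    finally:
        server.terminate()  # stop the server between tests
        server.wait()
```

The key difference from run-to-completion mode is that the server process and the test script run concurrently, and the test script's exit code decides the verdict.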
When successful, the valkey-benchmark outputs, among other things, the following:
Summary:
throughput summary: 344827.59 requests per second
Our test file will match the number of requests per second and verify that it is greater than 0 to determine the success or failure of a run.
This is the test file, dockerfile_data/valkey-test.py:
#!/usr/bin/python3
import re, subprocess

regex = re.compile(r"throughput summary:\s+([0-9.]+)\s+requests per second") # Used to match the throughput
bench_cmd = ["/root/valkey/src/valkey-benchmark", "-t", "get"] # Benchmark only GET

output = subprocess.check_output(bench_cmd).decode("utf-8")

# If our regular expression matches, grab the throughput and check that it's > 0
match = re.search(regex, output)
if match:
    throughput = float(match.groups()[0])
    if throughput > 0:
        exit(0) # Throughput is positive, test succeeded
exit(1) # Couldn't match the throughput regex, or it's not positive, test failed

The test file works a bit differently compared to what we have with run-to-completion applications.
As mentioned previously, this time it runs concurrently with the server under test, valkey-server.
It is the test file's responsibility to execute the client workload: here this is achieved by invoking valkey-benchmark and capturing the command's output in a variable using the subprocess.check_output function.
We then apply the matching method described above to the output, exiting with 0 if the test is a success (the throughput is positive) and 1 if not (the throughput could not be matched or is not positive).
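To convince ourselves that the regular expression does what we want, we can exercise it against the sample benchmark output shown earlier. This is a standalone sketch, independent of Loupe and of a running Valkey server:

```python
import re

# The sample valkey-benchmark output from above
sample = """Summary:
  throughput summary: 344827.59 requests per second"""

# Same pattern as in the test script
regex = re.compile(r"throughput summary:\s+([0-9.]+)\s+requests per second")

match = regex.search(sample)
throughput = float(match.group(1))
print(throughput > 0)  # → True
```

A run that crashed or produced no summary line would fail the regex search, and the test script would therefore exit with status 1.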
Loupe can then be invoked as follows to launch the exploration:
./loupe generate -b -db ../loupedb -a "valkey" -w "get" -d ./valkey.dockerfile
This should take a few minutes.
Once done, the results will be in loupedb/valkey/benchmark-get/<unique id>/data/dyn.csv.