
Commit 3da5263

Finished final chapter
1 parent 51dd7c2 commit 3da5263

File tree

3 files changed: +81 −114 lines changed
content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_install.md

Lines changed: 23 additions & 66 deletions
@@ -25,80 +25,37 @@ cd rexec_sweet
 2. Clone the `rexec_sweet.py` script from the GitHub repository:

 ```bash
-git clone https://github.com/geremyCohen/go_benchmarks.git
+git clone https://github.com/geremyCohen/go_benchmarks.git
+cd go_benchmarks
 ```

-3. Copy and paste this code into your terminal to create the `rexec_sweet.py` file:
+3. Copy and paste this code into your terminal to run the installer for `rexec_sweet.py`:

 ```bash
-# Detect OS
-OS=$(uname -s)
-
-# Install based on detected OS
-if [ "$OS" = "Darwin" ]; then
-  echo "Detected macOS, installing with Homebrew..."
-
-  # Update Homebrew
-  brew update
-
-  # Check and install required packages
-  for package in pyenv virtualenv pyenv-virtualenv; do
-    if which $package &>/dev/null || brew list $package &>/dev/null; then
-      echo "$package is already installed"
-    else
-      echo "Installing $package..."
-      brew install $package
-    fi
-  done
-
-elif [ "$OS" = "Linux" ]; then
-  echo "Detected Linux, installing with apt-get..."
-
-  # Update package lists
-  sudo apt-get -y update
-
-  # Install dependencies
-  sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev git
-
-  # Check if pyenv is already installed
-  if which pyenv &>/dev/null; then
-    echo "pyenv is already installed"
-  else
-    echo "Installing pyenv..."
-    curl https://pyenv.run | bash
-    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
-    echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
-    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
-    source ~/.bashrc
-  fi
-
-else
-  echo "Unsupported operating system: $OS"
-  echo "Please install pyenv manually for your system."
-  exit 1
-fi
-
-# Install Python 3.9.22
-pyenv install 3.9.22
-
-# Create a virtualenv for this project
-pyenv virtualenv 3.9.22 rexec-sweet-env
-
-# Clone the repository and set the local pyenv version
-git clone https://github.com/geremyCohen/go_benchmarks.git
-cd go_benchmarks
-pyenv local rexec-sweet-env
-
-# Install from the project directory
-pip install -e .
+./install.sh
+```
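
A checked-in shell script normally keeps its execute bit after a `git clone`, but if your environment drops it (some file transfers do), invoking the script through bash is equivalent. This is generic shell usage, not a requirement documented by the repo:

```bash
# Only needed if the execute bit was lost in transfer.
chmod +x install.sh

# Equivalent to ./install.sh
bash install.sh
```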

+If the install.sh script detects that you already have dependencies installed, it may ask you if you wish to reinstall them with the following prompt:

-gcloud auth login
+```output
+$ ./install.sh
+...
+pyenv is already installed
+virtualenv is already installed
+pyenv-virtualenv is already installed
+pyenv: /Users/gercoh01/.pyenv/versions/3.9.22 already exists
+continue with installation? (y/N)
 ```

-7. Make sure the instances you created in the previous section are running. If not, start them now.
+If you see this prompt, enter `N` (not `Y`!) to continue with the installation without modifying the existing installed dependencies.
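
For readers wondering what produces the "already installed" lines in that prompt: the per-package check from the inline script this commit removes captures the idea. A minimal sketch of that check (illustrative only; the actual install.sh may differ), using `command -v` as the portable spelling of the old script's `which` test:

```bash
# Sketch of an idempotent dependency check (macOS/Homebrew variant),
# modeled on the inline script removed by this commit.
for package in pyenv virtualenv pyenv-virtualenv; do
  if command -v "$package" >/dev/null 2>&1 || brew list "$package" >/dev/null 2>&1; then
    echo "$package is already installed"   # matches the prompt output above
  else
    echo "Installing $package..."
    brew install "$package"
  fi
done
```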
+
+4. Make sure the GCP VM instances you created in the previous section are running. If not, start them now.
+
+{{% notice Note %}}
+The install script will prompt you to authenticate with Google Cloud Platform (GCP) using the gcloud command-line tool at the end of the install. If, after installing, you have trouble running the script or get GCP authentication errors, you can authenticate manually by running `gcloud auth login`.
+{{% /notice %}}

-7. This script calls into the `gcloud` command to communicate with your running GCP instances. To ensure you are authenticated with GCP so these calls can be authenticated, run the following command to authenticate your local machine with GCP:

 Continue on to the next section to run the script.
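
To confirm the authentication step took before moving on, the stock gcloud commands below are enough; `gcloud auth list` marks the active account with an asterisk:

```bash
# Show credentialed accounts; the active one is marked with '*'.
gcloud auth list

# Re-authenticate interactively if no account is active.
gcloud auth login
```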

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_run.md

Lines changed: 58 additions & 48 deletions
@@ -6,32 +6,20 @@ weight: 54
 layout: learningpathall
 ---

-With `rexec_sweet.py` installed, your benchmarking instances running, and your localhost authenticated with GCP, you can now run `rexec_sweet.py`.
+With `rexec_sweet` installed, your benchmarking instances running, and your localhost authenticated with GCP, you can now run benchmarks in an automated fashion.

 ### Run an automated benchmark and analysis

-1. Run the script:
-
+1. Run the `rexec_sweet` script:

 ```bash
-python rexec_sweet.py
+rexec_sweet
 ```
-2. The script will prompt you for the name of the benchmark you want to run. Press enter to run the default benchmark, which is `markdown` (this is the reccomended benchmark to run the first time.)
-
-3. The script will call into GCP to detect all running VMs, and will prompt you to select the first VM you want to run the benchmark on. Select the first VM (which you installed sweet and benchstat on previously) and press enter.
-
-4. The script will prompt you to select the path to sweet. If you followed the directions exactly, you can accept the default by hitting Enter, otherwise, choose the path manually, and press enter.
-
-5. Repeat the process for the second VM. If you are only running two VMs, the script will automatically select the second VM for you.

-Upon entering the info for the second VM, the script will automatically run the benchmark on both VMs, and then run `benchstat` to compare the results.
+2. The script will prompt you for the name of the benchmark you want to run. Press Enter to run the default benchmark, `markdown` (this is the recommended benchmark to run the first time).

 ```output
-$ python rexec_sweet.py
-
-=== Benchmark Runner ===
-
-Select a benchmark (default is markdown):
+Available benchmarks:
 1. biogo-igor
 2. biogo-krishna
 3. bleve-index
@@ -42,47 +30,69 @@ Select a benchmark (default is markdown):
 8. gopher-lua
 9. markdown (default)
 10. tile38
-Enter number (1-10) [9]:
+Enter number (1-10) [default: markdown]:
+```

---- System 1 ---
-Please wait while fetching the instances list...
-Select an instance:
-1. c4-96 (default)
-2. c4a-48
-Enter number (1-2) [1]:
-Remote directory [~/benchmarks/sweet]:
+3. The script will call into GCP to detect all running VMs.

---- System 2 ---
-Only one instance available: c4a-48. Selecting it by default.
-Remote directory [~/benchmarks/sweet]:
+```output
+Available instances:
+1. c4 (will be used as first instance)
+2. c4a (will be used as second instance)

-Running benchmarks on the selected instances...
-[c4a-48] [sweet] Work directory: /tmp/gosweet1696486699
-[c4a-48] [sweet] Benchmarks: markdown (10 runs)
-[c4a-48] [sweet] Setting up benchmark: markdown
-[c4-96] [sweet] Work directory: /tmp/gosweet2013611383
-[c4-96] [sweet] Benchmarks: markdown (10 runs)
-[c4-96] [sweet] Setting up benchmark: markdown
-[c4a-48] [sweet] Running benchmark markdown for arm-benchmarks: run 1
-[c4-96] [sweet] Running benchmark markdown for arm-benchmarks: run 1
-...
-[c4-96] [sweet] Running benchmark markdown for arm-benchmarks: run 9
-[c4-96] [sweet] Running benchmark markdown for arm-benchmarks: run 10
-[c4a-48] ✅ benchmark completed
-[c4-96] ✅ benchmark completed
+Do you want to run the first two instances found with default install directories? [Y/n]:
+```
+
+If you want to run benchmarks on the instances labeled with "will be used as nth instance", and you installed Go and Sweet into the default directories as noted in the tutorial, you can press Enter to accept the defaults.
+
+Otherwise, if you want to run the benchmarks on instances that are not labeled "will be used as nth instance", or you installed Go and Sweet to folders different from those instructed in the tutorial, enter "n" and press Enter. The script will then prompt you to select the instances and runtime paths to run the benchmarks on.
+
+In this example, we manually select the instances and paths:
+
+```output
+Available instances:
+1. c4 (will be used as first instance)
+2. c4a (will be used as second instance)
+
+Do you want to run the first two instances found with default install directories? [Y/n]: n
+
+Select FIRST instance:
+Select an instance:
+1. c4
+2. c4a
+Enter number (1-2): 1
+Enter remote path for c4 [default: ~/benchmarks/sweet]:
+
+Select SECOND instance:
+Select an instance:
+1. c4
+2. c4a
+Enter number (1-2): 2
+Enter remote path for c4a [default: ~/benchmarks/sweet]:
+Output directory: /private/tmp/a/go_benchmarks/results/c4-c4a-markdown-20250610T190407
 ```
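
The instance picker shown above is populated from whatever gcloud reports as running; if a VM you expect is missing, it is usually stopped. You can check from your local machine with standard gcloud commands (the zone below is a placeholder; substitute your own):

```bash
# List only VMs in the RUNNING state; stopped VMs won't appear in the picker.
gcloud compute instances list --filter="status=RUNNING"

# Start a stopped instance; replace the zone with the one you created it in.
gcloud compute instances start c4a --zone=us-central1-a
```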
-Once the benchmarks are complete, the script will run `benchstat` to compare the results from both VMs:
+
+Upon entering instance names and paths for the VMs, the script will automatically run the benchmark on both VMs, run `benchstat` to compare the results, and then push the results to your local machine.

 ```output
-Created remote temp dir on c4a-48: /tmp/tmp.uGNVwNF0dl
+Running benchmarks on the selected instances...
+[c4a] [sweet] Work directory: /tmp/gosweet3216239593
+[c4] [sweet] Work directory: /tmp/gosweet2073316306...
+[c4a] ✅ benchmark completed
+[c4] ✅ benchmark completed
 ...
-Generated report at results/c4-96-c4a-48-markdown-20250603T172114/report.html
-Report generated in results/c4-96-c4a-48-markdown-20250603T172114
+Report generated in results/c4-c4a-markdown-20250610T190407
 ```
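
If you want to rerun the comparison by hand, `benchstat` takes two files of Go benchmark output and prints per-benchmark deltas. The file names below are hypothetical placeholders for the per-instance results sweet writes:

```bash
# Compare results from the two VMs; file names are illustrative placeholders.
benchstat c4-markdown.txt c4a-markdown.txt
```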
+
+Once on your local machine, `rexec_sweet` will generate an HTML report that will open automatically in your web browser.
+
+If you close the tab or browser, you can always reopen the report by navigating to the `results` subdirectory of the current working directory of the `rexec_sweet.py` script and opening report.html.
+
+![](images/run_auto/2.png)
+
 {{% notice Note %}}
 If you see output messages from `rexec_sweet.py` similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These messages indicate that the benchmark sets differ between the two VMs, which is common when running benchmarks on different hardware or configurations.
 {{% /notice %}}

 Upon completion, the script will generate a report in the `results` subdirectory of the current working directory of the `rexec_sweet.py` script, which opens automatically in your web browser to view the benchmark results and comparisons.
-
-![](images/run_auto/1.png)
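
To reopen a report from the terminal rather than the browser history, point your platform's opener at the generated file; the directory name below is the example from the output in this diff:

```bash
# macOS
open results/c4-c4a-markdown-20250610T190407/report.html

# Linux
xdg-open results/c4-c4a-markdown-20250610T190407/report.html
```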
