If you see this prompt, enter `N` (not `Y`!) to continue with the installation without modifying the existing installed dependencies.

4. Make sure the GCP VM instances you created in the previous section are running. If not, start them now; the `gcloud` sketch below shows one way to check and start them.
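If you are unsure of the state of your instances, you can check and start them from your local machine with the `gcloud` CLI. This is a minimal sketch; the instance name and zone are placeholders for your own values:

```bash
# List all instances in the current project and their status (RUNNING, TERMINATED, ...)
gcloud compute instances list

# Start a stopped instance (substitute your own instance name and zone)
gcloud compute instances start my-benchmark-vm --zone=us-central1-a
```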

{{% notice Note %}}
The install script will prompt you to authenticate with Google Cloud Platform (GCP) using the `gcloud` command-line tool at the end of the install. If you have trouble running the script after installation, or you see GCP authentication errors, you can manually authenticate with GCP by running the following command: `gcloud auth login`
{{% /notice %}}
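
To verify that you are authenticated and see which account is active, you can run:

```bash
# List credentialed accounts; the active account is marked with an asterisk
gcloud auth list
```
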
Continue on to the next section to run the script.
With `rexec_sweet` installed, your benchmarking instances running, and your localhost authenticated with GCP, you can now run benchmarks in an automated fashion.

### Run an automated benchmark and analysis

1. Run the `rexec_sweet` script:

```bash
rexec_sweet
```

2. The script will prompt you for the name of the benchmark you want to run. Press Enter to run the default benchmark, which is `markdown` (this is the recommended benchmark to run the first time).

```output
Available benchmarks:
1. biogo-igor
2. biogo-krishna
3. bleve-index
...
8. gopher-lua
9. markdown (default)
10. tile38
Enter number (1-10) [default: markdown]:
```

3. The script will call into GCP to detect all running VMs.

```output
Available instances:
1. c4 (will be used as first instance)
2. c4a (will be used as second instance)

Do you want to run the first two instances found with default install directories? [Y/n]:
```

If you want to run the benchmarks on the instances labeled "will be used as nth instance", and you installed Go and Sweet into the default directories as noted in the tutorial, press Enter to accept the defaults.

Otherwise, if you want to run the benchmarks on instances that are not labeled "will be used as nth instance", or you installed Go and Sweet into directories other than those used in the tutorial, enter `n` and press Enter. The script will then prompt you to select the instances and remote paths to use.

In this example, we manually select the instances and paths:

```output
Available instances:
1. c4 (will be used as first instance)
2. c4a (will be used as second instance)

Do you want to run the first two instances found with default install directories? [Y/n]: n

Select FIRST instance:
Select an instance:
1. c4
2. c4a
Enter number (1-2): 1
Enter remote path for c4 [default: ~/benchmarks/sweet]:

Select SECOND instance:
Select an instance:
1. c4
2. c4a
Enter number (1-2): 2
Enter remote path for c4a [default: ~/benchmarks/sweet]:
```

Upon entering instance names and paths for the VMs, the script will automatically run the benchmark on both VMs, run `benchstat` to compare the results, and then push the results to your local machine:

```output
Running benchmarks on the selected instances...
[c4a] [sweet] Work directory: /tmp/gosweet3216239593
[c4] [sweet] Work directory: /tmp/gosweet2073316306...
[c4a] ✅ benchmark completed
[c4] ✅ benchmark completed
...
Report generated in results/c4-c4a-markdown-20250610T190407
```
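
The comparison step uses the standard `benchstat` tool. If you ever want to compare two sets of benchmark results manually, a minimal sketch looks like this (the result file names are hypothetical):

```bash
# Compare Go benchmark results captured on the two machines;
# benchstat prints per-benchmark deltas and geomean summaries
benchstat c4.results c4a.results
```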

Once the results are on your local machine, `rexec_sweet` generates an HTML report and opens it automatically in your web browser.

If you close the tab or browser, you can reopen the report by navigating to the `results` subdirectory of the directory you ran `rexec_sweet` from and opening `report.html`.
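
For example, using the report path from the output above, you can reopen it from the command line (`open` on macOS, `xdg-open` on most Linux desktops):

```bash
# macOS
open results/c4-c4a-markdown-20250610T190407/report.html

# Linux
xdg-open results/c4-c4a-markdown-20250610T190407/report.html
```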



{{% notice Note %}}
If you see output messages from `rexec_sweet` similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These messages indicate that the benchmark sets differ between the two VMs, which is common when running benchmarks on different hardware or configurations.
{{% /notice %}}

You can now use this report to explore the benchmark results and compare performance between the two VMs.