content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/overview.md (1 addition, 1 deletion)

@@ -31,7 +31,7 @@ In this Learning Path, you'll compare performance using two four-core GCP instances
 {{% notice Note %}}
 Arm-based c4a-standard-4 instances and Intel-based c4-standard-8 instances both utilize four cores. Both instances are categorized by GCP as members of a series that demonstrates consistent high performance.
-The main difference between the two is that c4a has 16 GB of RAM, while c4 has 30 GB of RAM. This Learning Path uses equivalent core counts to ensure a fair performance comparison.
+The main difference between the two is that c4a has 16 GB of RAM, while c4 has 30 GB of RAM. This Learning Path uses equivalent core counts as an example of performance comparison.
content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_install.md (26 additions, 27 deletions)

@@ -6,66 +6,65 @@ weight: 53
 layout: learningpathall
 ---
-In the last section, you learned how to run benchmarks and Benchstat manually. Now you'll automate that process and generate visual reports using a script called `rexec_sweet.py`.
+In the last section, you learned how to run benchmarks and Benchstat manually. Now you'll automate that process and generate visual reports using a tool called `rexec_sweet`.
-## What is rexec_sweet.py?
+## What is rexec_sweet?

-`rexec_sweet.py` is a script that automates the benchmarking workflow: it connects to your GCP instances, runs benchmarks, collects results, and generates HTML reports - all in one step.
+`rexec_sweet` is a Python project available on GitHub that automates the benchmarking workflow. It connects to your GCP instances, runs benchmarks, collects results, and generates HTML reports - all in one step.

 It provides several key benefits:

 - **Automation**: Runs benchmarks on multiple VMs without manual SSH connections
 - **Consistency**: Ensures benchmarks are executed with identical parameters
 - **Visualization**: Generates HTML reports with interactive charts for easier analysis

-Before running the script, ensure you've completed the "Install Go, Sweet, and Benchstat" step. All other dependencies are installed automatically by the setup script.
+Before running the tool, ensure you've completed the "Install Go, Sweet, and Benchstat" step. All other dependencies are installed automatically by the installer.
 ## Set up rexec_sweet

-Follow the steps below to set up rexec_sweet.py.
+Follow the steps below to set up `rexec_sweet`.

 ### Create a working directory

-On your local machine, open a terminal, then create and change into a directory to store the `rexec_sweet.py` script and related files:
+On your local machine, open a terminal, then create and change into a new directory:

-```bash
-mkdir rexec_sweet
-cd rexec_sweet
-```
+```bash
+mkdir rexec_sweet
+cd rexec_sweet
+```
 ### Clone the repository

-Get the `rexec_sweet.py` script from the GitHub repository:
...
-If you see this prompt, enter `N` to continue with the installation without modifying the existing installed dependencies.
+If you see this prompt, enter `N` to continue with the installation without modifying the existing installed dependencies.
 ### Verify VM status

 Make sure the GCP VM instances you created in the previous section are running. If not, start them now, and wait a few minutes for them to finish booting.

 {{% notice Note %}}
-The install script prompts you to authenticate with Google Cloud Platform (GCP) using the gcloud command-line tool at the end of install. If after installing you have issues running the script and/or get GCP authentication errors, you can manually authenticate with GCP by running the following command: `gcloud auth login`
+The installer prompts you to authenticate with Google Cloud Platform (GCP) using the gcloud command-line tool at the end of the install. If you have trouble running the tool after installing, or you get GCP authentication errors, you can authenticate manually by running: `gcloud auth login`
 {{% /notice %}}
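The VM status check described above can also be scripted with the gcloud CLI. This is only a sketch, not part of the `rexec_sweet` installer; the instance name and zone in the usage comment are hypothetical placeholders for the ones you created:

```shell
# Sketch only: warn if a benchmarking VM is not in the RUNNING state.
# Pass the instance name and zone as arguments; substitute your own values.
check_vm() {
  status=$(gcloud compute instances describe "$1" --zone "$2" --format 'value(status)')
  if [ "$status" = "RUNNING" ]; then
    echo "$1 is RUNNING"
  else
    echo "$1 is $status - start it with: gcloud compute instances start $1 --zone $2"
  fi
}

# Example usage (hypothetical instance names and zone):
# check_vm my-c4a-instance us-central1-a
# check_vm my-c4-instance us-central1-a
```

Running it for both instances before launching the tool avoids a failed benchmark run partway through.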
69
69
70
-
71
-
Continue on to the next section to run the script and see how it simplifies the benchmarking process.
70
+
Continue on to the next section to run the tool and see how it simplifies the benchmarking process.
content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_run.md (9 additions, 7 deletions)

@@ -5,16 +5,18 @@ weight: 54
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+
 With `rexec_sweet` installed, your benchmarking instances running, and your local machine authenticated with GCP, you're ready to run automated benchmarks across your configured environments.
 ## Run an automated benchmark and generate results

 To begin, open a terminal on your local machine and run:

 ```bash
-rexec_sweet
+rexec-sweet
 ```
-The script will prompt you to choose a benchmark.
+
+The tool will prompt you to choose a benchmark.

 Press **Enter** to run the default benchmark, markdown, which is a good starting point for your first run.
@@ -33,7 +35,7 @@ Available benchmarks:
33
35
Enter number (1-10) [default: markdown]:
34
36
```
35
37
36
-
The script then detects your running GCP instances and displays them. You’ll be asked whether you want to use the first two instances it finds and the default install paths.
38
+
The tool then detects your running GCP instances and displays them. You’ll be asked whether you want to use the first two instances it finds and the default install paths.
After selecting instances and paths, the script will:
75
+
After selecting instances and paths, the tool will:
74
76
- Run the selected benchmark on both VMs
75
77
- Use `benchstat` to compare the results
76
78
- Push the results to your local machine
@@ -89,13 +91,13 @@ Report generated in results/c4-c4a-markdown-20250610T190407

 Once on your local machine, `rexec_sweet` will generate an HTML report that opens automatically in your web browser.

-If you close the report, you can reopen it by navigating to the `results` subdirectory and opening report.html in your browser.
+If you close the report, you can reopen it by navigating to the `results` subdirectory and opening report.html in your browser.
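Using the sample output above, reopening that run's report from the tool's working directory could look something like this. The timestamped directory name is taken from the sample run; yours will differ, and the opener command depends on your OS:

```shell
# Directory name taken from the sample run above; substitute your own timestamp.
REPORT=results/c4-c4a-markdown-20250610T190407/report.html
if [ -f "$REPORT" ]; then
  xdg-open "$REPORT"   # Linux; on macOS use: open "$REPORT"
else
  echo "no report at $REPORT"
fi
```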
 {{% notice Note %}}
-If you see output messages from `rexec_sweet.py` similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These warnings typically appear when benchmark sets differ slightly between the two VMs.
+If you see output messages similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These warnings typically appear when benchmark sets differ slightly between the two VMs.
 {{% /notice %}}

-Upon completion, the script generates a report in the `results` subdirectory of the current working directory of the `rexec_sweet.py` script, which opens automatically in your web browser to view the benchmark results and comparisons.
+Upon completion, the tool generates a report in the `results` subdirectory of the current working directory, which opens automatically in your web browser to view the benchmark results and comparisons.
content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/running_benchmarks.md (4 additions, 4 deletions)

@@ -98,15 +98,15 @@ Large gaps between average and peak memory usage suggest opportunities for memory
 Here are some general tips to keep in mind as you explore benchmarking across different apps and instance types:

-- Unlike Intel and AMD processors that use hyper-threading, Arm processors provide single-threaded cores without hyper-threading. A four-core Arm processor has four independent cores running four threads, while a four-core Intel processor provides eight logical cores through hyper-threading. This means that each Arm vCPU represents a full physical core, while each Intel/AMD vCPU represents half a physical core. For fair comparison, this Learning Path uses a 4-vCPU Arm instance against an 8-vCPU Intel instance. When scaling up instance sizes during benchmarking, make sure to keep a 2:1 Intel/AMD:Arm vCPU ratio if you wish to keep parity on CPU resources.
+- On Intel and AMD processors with hyper-threading, each vCPU corresponds to a logical core (hardware thread), and two vCPUs share a single physical core. On Arm processors (which do not use hyper-threading), each vCPU corresponds to a full physical core. For comparison, this Learning Path uses a 4-vCPU Arm instance against an 8-vCPU Intel instance, maintaining a 2:1 Intel:Arm vCPU ratio to keep parity on physical CPU resources.
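You can verify this vCPU-to-core mapping on each VM yourself. This is an optional check, not a required step of the Learning Path, and `lscpu` may be absent on very minimal images:

```shell
# Run on each VM to see how vCPUs map to hardware.
nproc                                       # logical CPUs visible to the OS
lscpu | grep 'Thread(s) per core' || true   # typically 2 on SMT x86, 1 on Arm
```

On the 8-vCPU Intel instance you would expect 2 threads per core; on the 4-vCPU Arm instance, 1.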
-- Run each benchmark at least 10 times (-count 10) to account for outliers and produce statistically meaningful results.
+- Run each benchmark at least 10 times to account for outliers and produce statistically meaningful results.
+
+- Results can be bound by CPU, memory, or I/O performance. If you see significant differences in one metric but not others, it might indicate a bottleneck in that area; running the same benchmark with different configurations (for example, using more CPU cores or more memory) can help identify the bottleneck.

-- Results can be bound by CPU, memory, or I/O performance. If you see significant differences in one metric but not others, it might indicate a bottleneck in that area; running the same benchmark with different configurations (for example, using more CPU cores or more memory) can help identify the bottleneck.
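The 10-run guidance above corresponds to the manual workflow from the earlier section. As a rough sketch (the result file names are placeholders, and this assumes Go and `benchstat` are on your PATH as installed earlier):

```shell
# Placeholder sketch of the manual flow that rexec_sweet automates.
# On each VM, run every benchmark 10 times and save the raw output, e.g.:
#   go test -run='^$' -bench=. -count=10 ./... > c4a.txt
# Then copy both result files to one machine and compare them:
OLD=c4.txt NEW=c4a.txt
if command -v benchstat >/dev/null 2>&1 && [ -f "$OLD" ] && [ -f "$NEW" ]; then
  benchstat "$OLD" "$NEW"   # prints per-benchmark deltas and geomeans
else
  echo "collect $OLD and $NEW first (see comments above)"
fi
```

With 10 samples per benchmark, `benchstat` can report variance and flag statistically insignificant differences instead of comparing two noisy single runs.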