Commit 020628d

Merge pull request #2088 from jasonrandrews/review
review Go Sweet Learning Path
2 parents e2e08ca + 97bb398 commit 020628d

5 files changed (+49, -40 lines)

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/_index.md

Lines changed: 9 additions & 1 deletion
@@ -27,7 +27,15 @@ tools_software_languages:
 operatingsystems:
 - Linux
 
-
+further_reading:
+    - resource:
+        title: Effective Go
+        link: https://go.dev/doc/effective_go#performance
+        type: blog
+    - resource:
+        title: Benchmark testing in Go
+        link: https://dev.to/stefanalfbo/benchmark-testing-in-go-17dc
+        type: blog
 
### FIXED, DO NOT MODIFY
3341
# ================================================================================

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/overview.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ In this Learning Path, you'll compare performance using two four-core GCP instan
 
 {{% notice Note %}}
 Arm-based c4a-standard-4 instances and Intel-based c4-standard-8 instances both utilize four cores. Both instances are categorized by GCP as members of a series that demonstrates consistent high performance.
-The main difference between the two is that c4a has 16 GB of RAM, while c4 has 30 GB of RAM. This Learning Path uses equivalent core counts to ensure a fair performance comparison.
+The main difference between the two is that c4a has 16 GB of RAM, while c4 has 30 GB of RAM. This Learning Path uses equivalent core counts as an example of performance comparison.
 {{% /notice %}}
 
 

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_install.md

Lines changed: 26 additions & 27 deletions
@@ -6,66 +6,65 @@ weight: 53
 layout: learningpathall
 ---
 
-In the last section, you learned how to run benchmarks and Benchstat manually. Now you'll automate that process and generate visual reports using a script called `rexec_sweet.py`.
+In the last section, you learned how to run benchmarks and Benchstat manually. Now you'll automate that process and generate visual reports using a tool called `rexec_sweet`.
 
-## What is rexec_sweet.py?
+## What is rexec_sweet?
 
-`rexec_sweet.py` is a script that automates the benchmarking workflow: it connects to your GCP instances, runs benchmarks, collects results, and generates HTML reports - all in one step.
+`rexec_sweet` is a Python project available on GitHub that automates the benchmarking workflow. It connects to your GCP instances, runs benchmarks, collects results, and generates HTML reports - all in one step.
 
 It provides several key benefits:
 
 - **Automation**: Runs benchmarks on multiple VMs without manual SSH connections
 - **Consistency**: Ensures benchmarks are executed with identical parameters
 - **Visualization**: Generates HTML reports with interactive charts for easier analysis
 
-Before running the script, ensure you've completed the "Install Go, Sweet, and Benchstat" step. All other dependencies are installed automatically by the setup script.
+Before running the tool, ensure you've completed the "Install Go, Sweet, and Benchstat" step. All other dependencies are installed automatically by the installer.
 
 ## Set up rexec_sweet
 
-Follow the steps below to set up rexec_sweet.py.
+Follow the steps below to set up `rexec_sweet`.
 
 ### Create a working directory
 
-On your local machine, open a terminal, then create and change into a directory to store the `rexec_sweet.py` script and related files:
+On your local machine, open a terminal, and create a new directory:
 
-```bash
-mkdir rexec_sweet
-cd rexec_sweet
-```
+```bash
+mkdir rexec_sweet
+cd rexec_sweet
+```
 
 ### Clone the repository
 
-Get the `rexec_sweet.py` script from the GitHub repository:
+Get `rexec_sweet` from GitHub:
 
-```bash
-git clone https://github.com/geremyCohen/go_benchmarks.git
-cd go_benchmarks
-```
+```bash
+git clone https://github.com/geremyCohen/go_benchmarks.git
+cd go_benchmarks
+```
 
 ### Run the installer
 
 Copy and paste this command into your terminal to run the installer:
 
-```bash
-./install.sh
-```
+```bash
+./install.sh
+```
 
-If the install.sh script detects that you already have dependencies installed, it might ask you if you want to reinstall them:
+If the installer detects that you already have dependencies installed, it might ask you if you want to reinstall them:
 
-```output
-pyenv: /Users/gercoh01/.pyenv/versions/3.9.22 already exists
-continue with installation? (y/N)
-```
+```output
+pyenv: /Users/gercoh01/.pyenv/versions/3.9.22 already exists
+continue with installation? (y/N)
+```
 
-If you see this prompt, enter `N` to continue with the installation without modifying the existing installed dependencies.
+If you see this prompt, enter `N` to continue with the installation without modifying the existing installed dependencies.
 
 ### Verify VM status
 
 Make sure the GCP VM instances you created in the previous section are running. If not, start them now, and wait a few minutes for them to finish booting.
 
 {{% notice Note %}}
-The install script prompts you to authenticate with Google Cloud Platform (GCP) using the gcloud command-line tool at the end of install. If after installing you have issues running the script and/or get GCP authentication errors, you can manually authenticate with GCP by running the following command: `gcloud auth login`
+The installer prompts you to authenticate with Google Cloud Platform (GCP) using the gcloud command-line tool at the end of the install. If you have issues running the tool after installing, or get GCP authentication errors, you can manually authenticate with GCP by running the following command: `gcloud auth login`
 {{% /notice %}}
 
-
-Continue on to the next section to run the script and see how it simplifies the benchmarking process.
+Continue on to the next section to run the tool and see how it simplifies the benchmarking process.
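The automation described above amounts to running the same benchmark command on each instance and collecting the output. As a minimal sketch of that loop (not part of the Learning Path; instance names are hypothetical, and `echo` stands in for a real `gcloud compute ssh` invocation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn is a stand-in for executing a benchmark command on a remote
// instance; a real version would shell out to `gcloud compute ssh`.
func runOn(instance, command string) (string, error) {
	// Placeholder: echo the command locally instead of running it remotely.
	out, err := exec.Command("echo", instance+": "+command).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	instances := []string{"c4-instance", "c4a-instance"} // hypothetical names
	for _, inst := range instances {
		result, err := runOn(inst, "go test -bench . -count 10")
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(result)
	}
}
```

Running the same parameters against every instance is what gives the later `benchstat` comparison its consistency.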

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/rexec_sweet_run.md

Lines changed: 9 additions & 7 deletions
@@ -5,16 +5,18 @@ weight: 54
### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
+
 With `rexec_sweet` installed, your benchmarking instances running, and your local machine authenticated with GCP, you're ready to run automated benchmarks across your configured environments.
 
 ## Run an automated benchmark and generate results
 
 To begin, open a terminal on your local machine and run:
 
 ```bash
-rexec_sweet
+rexec-sweet
 ```
-The script will prompt you to choose a benchmark.
+
+The tool will prompt you to choose a benchmark.
 
 Press **Enter** to run the default benchmark, markdown, which is a good starting point for your first run.
 
@@ -33,7 +35,7 @@ Available benchmarks:
 Enter number (1-10) [default: markdown]:
 ```
 
-The script then detects your running GCP instances and displays them. You’ll be asked whether you want to use the first two instances it finds and the default install paths.
+The tool then detects your running GCP instances and displays them. You’ll be asked whether you want to use the first two instances it finds and the default install paths.
 
 ```output
 Available instances:
@@ -70,7 +72,7 @@ Output directory: /private/tmp/a/go_benchmarks/results/c4-c4a-markdown-20250610T
 ...
 ```
 
-After selecting instances and paths, the script will:
+After selecting instances and paths, the tool will:
 - Run the selected benchmark on both VMs
 - Use `benchstat` to compare the results
 - Push the results to your local machine
@@ -89,13 +91,13 @@ Report generated in results/c4-c4a-markdown-20250610T190407
 
 Once on your local machine, `rexec_sweet` will generate an HTML report that opens automatically in your web browser.
 
-If you close the report, you can reopen it by navigating to the `results` subdirectory and opening report.html in your browser.
+If you close the report, you can reopen it by navigating to the `results` subdirectory and opening report.html in your browser.
 
 ![alt-text#center](images/run_auto/2.png "Sample HTML report")
 
 
 {{% notice Note %}}
-If you see output messages from `rexec_sweet.py` similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These warnings typically appear when benchmark sets differ slightly between the two VMs.
+If you see output messages similar to "geomeans may not be comparable" or "Dn: ratios must be >0 to compute geomean", this is expected and can be ignored. These warnings typically appear when benchmark sets differ slightly between the two VMs.
 {{% /notice %}}
 
-Upon completion, the script generates a report in the `results` subdirectory of the current working directory of the `rexec_sweet.py` script, which opens automatically in your web browser to view the benchmark results and comparisons.
+Upon completion, the tool generates a report in the `results` subdirectory of the current working directory, which opens automatically in your web browser to view the benchmark results and comparisons.
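The numbers that `benchstat` compares in these reports come from standard Go `testing` benchmarks. A minimal hedged sketch of what such a benchmark looks like and how it is measured (the function name is hypothetical; the real Sweet suites, such as markdown, are far larger):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// benchmarkJoin is a hypothetical stand-in for the kind of function
// a benchmark suite times: the body runs b.N times.
func benchmarkJoin(b *testing.B) {
	parts := []string{"go", "sweet", "benchstat"}
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "-")
	}
}

func main() {
	// testing.Benchmark grows b.N until the timing stabilizes,
	// just as `go test -bench` does, and returns the result.
	res := testing.Benchmark(benchmarkJoin)
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```

The ns/op values produced this way on each VM are what `benchstat` lines up side by side in the HTML report.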

content/learning-paths/servers-and-cloud-computing/go-benchmarking-with-sweet/running_benchmarks.md

Lines changed: 4 additions & 4 deletions
@@ -98,15 +98,15 @@ Large gaps between average and peak memory usage suggest opportunities for memor
 
 Here are some general tips to keep in mind as you explore benchmarking across different apps and instance types:
 
-- Unlike Intel and AMD processors that use hyper-threading, Arm processors provide single-threaded cores without hyper-threading. A four-core Arm processor has four independent cores running four threads, while a four-core Intel processor provides eight logical cores through hyper-threading. This means that each Arm vCPU represents a full physical core, while each Intel/AMD vCPU represents half a physical core. For fair comparison, this Learning Path uses a 4-vCPU Arm instance against an 8-vCPU Intel instance. When scaling up instance sizes during benchmarking, make sure to keep a 2:1 Intel/AMD:Arm vCPU ratio if you wish to keep parity on CPU resources.
+- On Intel and AMD processors with hyper-threading, each vCPU corresponds to a logical core (hardware thread), and two vCPUs share a single physical core. On Arm processors (which do not use hyper-threading), each vCPU corresponds to a full physical core. For comparison, this Learning Path uses a 4-vCPU Arm instance against an 8-vCPU Intel instance, maintaining a 2:1 Intel:Arm vCPU ratio to keep parity on physical CPU resources.
 
-- Run each benchmark at least 10 times (-count 10) to account for outliers and produce statistically meaningful results.
+- Run each benchmark at least 10 times to account for outliers and produce statistically meaningful results.
+
+- Results can be bound by CPU, memory, or I/O performance. If you see significant differences in one metric but not others, it might indicate a bottleneck in that area; running the same benchmark with different configurations (for example, using more CPU cores or more memory) can help identify the bottleneck.
 
-- Results can be bound by CPU, memory, or I/O performance. If you see significant differences in one metric but not others, it might indicate a bottleneck in that area; running the same benchmark with different configurations (for example, using more CPU cores or more memory) can help identify the bottleneck.
 
 
 
-
 
 
 
