Commit e9049de

Update benchmarking.md
1 parent d7e7f88 commit e9049de

content/learning-paths/servers-and-cloud-computing/typescript-on-gcp/benchmarking.md

Lines changed: 29 additions & 28 deletions
@@ -9,10 +9,11 @@ layout: learningpathall
## JMH-style Custom Benchmarking

This section demonstrates how to benchmark TypeScript functions using a JMH-style (Java Microbenchmark Harness) methodology, implemented with Node.js's built-in `perf_hooks` module.

Unlike basic `console.time()` measurements, this approach executes multiple iterations, computes the average runtime, and produces stable, repeatable performance data that is useful for evaluating workloads on your Google Cloud C4A (Axion Arm64) VM running SUSE Linux.

### Create the Benchmark Script

Create a file named `benchmark_jmh.ts` inside your project directory with the content below:

```typescript
import { performance } from 'perf_hooks';
@@ -43,30 +44,34 @@ for (let i = 0; i < iterations; i++) {
const averageTime = totalTime / iterations;
console.log(`\nAverage execution time over ${iterations} iterations: ${averageTime.toFixed(3)} ms`);
```
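The diff above only shows fragments of `benchmark_jmh.ts`. For reference, a complete script consistent with those fragments might look like the sketch below — the workload size (`1_000_000`) and iteration count are assumptions, not values confirmed by the original file:

```typescript
// Assumed reconstruction of the full benchmark script: times a CPU-bound
// function over repeated iterations and reports the mean, JMH-style.
import { performance } from 'perf_hooks';

// CPU-bound sample workload: sums the integers from 0 to n.
function sumArray(n: number): number {
  let total = 0;
  for (let j = 0; j <= n; j++) {
    total += j;
  }
  return total;
}

const iterations = 10; // number of timed runs (assumed value)
let totalTime = 0;

for (let i = 0; i < iterations; i++) {
  const start = performance.now();
  sumArray(1_000_000); // workload size is an assumption
  const duration = performance.now() - start;
  totalTime += duration;
  console.log(`Iteration ${i + 1}: ${duration.toFixed(3)} ms`);
}

const averageTime = totalTime / iterations;
console.log(`\nAverage execution time over ${iterations} iterations: ${averageTime.toFixed(3)} ms`);
```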

Code explanation:

| Component | Description |
|-----------|-------------|
| `performance.now()` | Provides high-resolution timestamps (sub-millisecond precision) for accurate timing. |
| `sumArray(n)` | A simple CPU-bound function that sums integers from 0 to `n`, simulating a workload suitable for benchmarking raw arithmetic throughput. |
| `iterations` | Defines how many times the test runs; multiple repetitions reduce noise and average out one-off delays or GC pauses. |
| Loop and averaging | Each run's duration is recorded and the mean execution time is reported, mirroring how JMH computes stable results in Java microbenchmarks. |

This JMH-style benchmarking approach provides more accurate and repeatable performance metrics than a single execution, making it ideal for performance testing on Arm-based systems.

### Compile the TypeScript Benchmark
First, compile the benchmark file from TypeScript to JavaScript using the TypeScript compiler (`tsc`):

```console
tsc benchmark_jmh.ts
```

This command transpiles your TypeScript code into standard JavaScript, generating a file named `benchmark_jmh.js` in the same directory. The resulting JavaScript can be executed by Node.js, allowing you to measure performance on your Google Cloud C4A (Arm64) virtual machine.

### Run the Benchmark
Now, execute the compiled JavaScript file with Node.js:

```console
node benchmark_jmh.js
```

You should see output similar to:

```output
Iteration 1: 2.286 ms
@@ -85,21 +90,16 @@ Average execution time over 10 iterations: 0.888 ms

### Benchmark Metrics Explained

* Iteration times → each iteration represents the time taken for one complete execution of the benchmarked function.
* Average execution time → the total of all iteration times divided by the number of iterations, giving a stable measure of real-world performance.
* Why multiple iterations? A single run can be skewed by transient factors such as CPU scheduling, garbage collection, or memory caching; running multiple iterations and averaging the results smooths out this variability, producing more repeatable and statistically meaningful data, similar to Java's JMH benchmarking methodology.

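The run-to-run variability described above can also be quantified. A small illustrative sketch (the `summarize` helper is not part of the original script) that computes the mean and sample standard deviation of recorded iteration times:

```typescript
// Illustrative helper: summarize per-iteration timings (in ms) with the
// mean and sample standard deviation to quantify run-to-run noise.
function summarize(times: number[]): { mean: number; stddev: number } {
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  const variance =
    times.reduce((acc, t) => acc + (t - mean) ** 2, 0) / (times.length - 1);
  return { mean, stddev: Math.sqrt(variance) };
}

// Example with the kind of numbers the benchmark prints:
const { mean, stddev } = summarize([2.286, 0.731, 0.642, 0.640, 0.639]);
console.log(`mean=${mean.toFixed(3)} ms, stddev=${stddev.toFixed(3)} ms`);
```

A low standard deviation relative to the mean indicates the benchmark has stabilized.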
### Interpretation

The average execution time reflects how efficiently the function executes under steady-state conditions.
The first iteration often shows higher latency because Node.js performs initial JIT (Just-In-Time) compilation and optimization, a common warm-up behavior in JavaScript/TypeScript benchmarks.

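To reduce that first-iteration skew, a common JMH-inspired refinement (not part of the original script; the warm-up and iteration counts below are assumptions) is to run a few untimed warm-up iterations before measuring:

```typescript
import { performance } from 'perf_hooks';

// Hypothetical helper: runs `warmup` untimed iterations before the
// `measured` timed ones, so JIT warm-up does not skew the average.
function benchmark(fn: () => void, warmup: number, measured: number): number {
  for (let i = 0; i < warmup; i++) fn(); // discard warm-up runs

  let totalTime = 0;
  for (let i = 0; i < measured; i++) {
    const start = performance.now();
    fn();
    totalTime += performance.now() - start;
  }
  return totalTime / measured; // mean time in ms over measured runs only
}

// Example: benchmark a small CPU-bound workload with 3 warm-up runs.
const avg = benchmark(() => {
  let s = 0;
  for (let j = 0; j < 100_000; j++) s += j;
}, 3, 10);
console.log(`Average over 10 measured runs: ${avg.toFixed(3)} ms`);
```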
### Benchmark summary on Arm64
105105
Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE):
@@ -110,9 +110,10 @@ Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm6

### TypeScript performance benchmarking comparison on Arm64 and x86_64

When you look at the benchmarking results, you will notice that on the Google Axion C4A Arm-based instances:

- The average execution time on Arm64 (~0.888 ms) shows that CPU-bound TypeScript operations run efficiently on Arm-based VMs.
- Initial iterations may show slightly higher times due to runtime warm-up and optimization overhead, which is common across architectures.
- Arm64 demonstrates stable iteration times after the first run, indicating consistent performance for repeated workloads.

This demonstrates that Google Cloud C4A Arm64 virtual machines provide production-grade stability and throughput for TypeScript workloads, whether used for application logic, scripting, or performance-critical services.
