## JMH-style Custom Benchmarking

This section demonstrates how to benchmark TypeScript functions using a JMH-style (Java Microbenchmark Harness) methodology implemented with Node.js's built-in `perf_hooks` module.

Unlike basic `console.time()` measurements, this approach executes multiple iterations, computes the average runtime, and produces stable and repeatable performance data, useful for evaluating workloads on your Google Cloud C4A (Axion Arm64) VM running SUSE Linux.
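
For contrast, a basic single-shot `console.time()` measurement looks like the sketch below; the label and workload size are illustrative, not part of the benchmark script:

```typescript
// Single-shot timing: one measurement, so the result is exposed to
// warm-up effects, CPU scheduling, and other one-off noise.
console.time('sumArray');          // 'sumArray' is an illustrative label
let total = 0;
for (let i = 0; i <= 1_000_000; i++) {
  total += i;
}
console.timeEnd('sumArray');       // prints something like: sumArray: 2.3ms
```
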
### Create the Benchmark Script

Create a file named `benchmark_jmh.ts` inside your project directory with the content below:

```typescript
import { performance } from 'perf_hooks';

// Sample CPU-bound workload: sums the integers from 0 to n
function sumArray(n: number): number {
  let sum = 0;
  for (let i = 0; i <= n; i++) {
    sum += i;
  }
  return sum;
}

// Number of benchmark repetitions; more runs reduce noise
const iterations = 10;
let totalTime = 0;

for (let i = 0; i < iterations; i++) {
  const start = performance.now();
  sumArray(1_000_000); // workload size is illustrative
  const end = performance.now();
  const duration = end - start;
  totalTime += duration;
  console.log(`Iteration ${i + 1}: ${duration.toFixed(3)} ms`);
}

const averageTime = totalTime / iterations;
console.log(`\nAverage execution time over ${iterations} iterations: ${averageTime.toFixed(3)} ms`);
```

Code explanation:

| Component | Description |
|-----------|-------------|
| **`performance.now()`** | Provides high-resolution timestamps (sub-millisecond precision) for accurate timing. |
| **`sumArray(n)`** | A simple CPU-bound function that sums integers from 0 to `n`. This simulates a computational workload suitable for benchmarking raw arithmetic throughput. |
| **`iterations`** | Defines how many times the test runs. Multiple repetitions reduce noise and help average out one-off delays or GC pauses. |
| **Loop and averaging** | Each run's duration is recorded; the mean execution time is then reported, mirroring how JMH computes stable results in Java microbenchmarks. |

This JMH-style benchmarking approach provides more accurate and repeatable performance metrics than a single execution, making it ideal for performance testing on Arm-based systems.

### Compile the TypeScript Benchmark

First, compile the benchmark file from TypeScript to JavaScript using the TypeScript compiler (`tsc`):

```console
tsc benchmark_jmh.ts
```
This command transpiles your TypeScript code into standard JavaScript, generating a file named `benchmark_jmh.js` in the same directory.

The resulting JavaScript can be executed by Node.js, allowing you to measure performance on your Google Cloud C4A (Arm64) virtual machine.
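
If you want to pin the compilation settings explicitly rather than relying on defaults, `tsc` also accepts compiler flags on the command line; the flag values below are one reasonable choice for Node.js, not a requirement of this Learning Path:

```console
tsc --target ES2020 --module commonjs benchmark_jmh.ts
```
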
### Run the Benchmark
Now, execute the compiled JavaScript file with Node.js:
```console
node benchmark_jmh.js
```
You should see output similar to:
```output
Iteration 1: 2.286 ms
...
Average execution time over 10 iterations: 0.888 ms
```
### Benchmark Metrics Explained

* Iteration times → Each iteration represents the time taken for one complete execution of the benchmarked function.
* Average execution time → Calculated as the total of all iteration times divided by the number of iterations. This gives a stable measure of real-world performance.
* Why multiple iterations? → A single run can be affected by transient factors such as CPU scheduling, garbage collection, or memory caching. Running multiple iterations and averaging the results smooths out variability, producing more repeatable and statistically meaningful data, similar to Java's JMH benchmarking methodology; the sketch after this list shows one way to quantify that variability.
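
To quantify that run-to-run variability, you can extend the harness to keep every per-iteration duration and report the standard deviation alongside the mean; a minimal sketch, where the `summarize` helper and the sample timings are hypothetical additions, not part of the original script:

```typescript
// Summarize a set of per-iteration durations (in milliseconds)
// with their mean and population standard deviation.
function summarize(durations: number[]): { mean: number; stdDev: number } {
  const mean = durations.reduce((acc, d) => acc + d, 0) / durations.length;
  const variance =
    durations.reduce((acc, d) => acc + (d - mean) ** 2, 0) / durations.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

// Example with made-up timings; in practice, push each `duration`
// from the benchmark loop into this array.
const { mean, stdDev } = summarize([2.286, 0.91, 0.74, 0.73, 0.72]);
console.log(`mean=${mean.toFixed(3)} ms, stdDev=${stdDev.toFixed(3)} ms`);
```
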
### Interpretation
The average execution time reflects how efficiently the function executes under steady-state conditions.

The first iteration often shows higher latency because Node.js performs initial JIT (Just-In-Time) compilation and optimization, a common warm-up behavior in JavaScript/TypeScript benchmarks.
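
One common mitigation, borrowed from JMH's warm-up phase, is to run a few unmeasured iterations before timing starts so the JIT has already optimized the hot path; a sketch of the idea, where the warm-up count is an arbitrary choice:

```typescript
// Unmeasured warm-up runs let V8's JIT compile and optimize the hot
// function before any timing is recorded, reducing first-run skew.
const warmupRuns = 3; // arbitrary warm-up count
for (let i = 0; i < warmupRuns; i++) {
  sumArray(1_000_000); // same workload as the measured loop
}
// ...then run the measured iterations exactly as in the script above.
```
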
### Benchmark summary on Arm64

Refer to the benchmark output shown earlier for the results from the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE).

### TypeScript performance benchmarking comparison on Arm64 and x86_64
When you look at the benchmarking results, you will notice that on the Google Axion C4A Arm-based instances:
- The average execution time on Arm64 (~0.888 ms) shows that CPU-bound TypeScript operations run efficiently on Arm-based VMs.
- Initial iterations may show slightly higher times due to runtime warm-up and optimization overhead, which is common across architectures.
- Arm64 demonstrates stable iteration times after the first run, indicating consistent performance for repeated workloads.

This demonstrates that Google Cloud C4A Arm64 virtual machines provide production-grade stability and throughput for TypeScript workloads, whether used for application logic, scripting, or performance-critical services.
0 commit comments