This repo contains a suite of fixtures & tools to track the performance of package managers. We benchmark various Node.js package managers (npm, yarn, pnpm, berry, deno, bun, vlt, nx, turbo) across different project types and scenarios.

## Environment

We currently only test the latest `linux` runner which is the most common GitHub Action environment. The [standard GitHub-hosted public runner environment](https://docs.github.com/en/actions/using-github-hosted-runners/using-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories) specs are below:
- VM: Linux
- Processor (CPU): 4
- Memory (RAM): 16 GB
- Storage (SSD): 14 GB
- Workflow label: `ubuntu-latest`
We may add Mac/Windows in the future but it will likely exponentially increase the already slow run time of the suite (**~20min**).
## Overview

The benchmarks measure:

- Project installation times (cold and warm cache)
- Task execution performance
- Standard deviation of results
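The standard-deviation figure above is ordinary sample statistics over repeated runs; a minimal sketch (the `mean`/`stddev` helpers are illustrative, not part of this repo's tooling):

```javascript
// Hypothetical helpers: mean and sample standard deviation of benchmark run times.
function mean(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function stddev(values) {
  const m = mean(values);
  const variance =
    values.reduce((sum, v) => sum + (v - m) ** 2, 0) / (values.length - 1);
  return Math.sqrt(variance);
}

// e.g. five install times (seconds) from repeated runs of one fixture:
console.log(stddev([10.2, 9.8, 10.5, 10.1, 9.9]).toFixed(2)); // "0.27"
```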
### Project Types

- Next.js
- Astro
- Svelte
- Vue

### Package Managers

- npm
- yarn
- pnpm
- Yarn Berry
- Deno
- Bun
- VLT
- NX
- Turbo
- Node.js

## Configuration/Normalization
We do a best-effort job to configure each tool to behave as similarly as possible to its peers, but there are limitations to this standardization in many scenarios (as each tool makes decisions about its default support for security checks/validations/feature-set). As part of the normalization process, we count the number of packages - post-installation - & use that to determine the average speed relative to the number of packages installed. This strategy helps account for when there are significant discrepancies between the package managers' dependency graph resolution ([you can read/see more here](https://docs.google.com/presentation/d/1ojXF4jb_1MyGhew2LCbdrZ4e_0vYUr-7CoMJLJsHwZY/edit?usp=sharing)).

#### Example:

- **Package Manager A** installs **1,000** packages in **10s** -> an avg. of **~10ms** per-package
- **Package Manager B** installs **10** packages in **1s** -> an avg. of **~100ms** per-package
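The arithmetic above can be expressed directly; a minimal sketch (the `avgMsPerPackage` helper is illustrative, not part of this repo's tooling):

```javascript
// Normalization described above: the average per-package install time is the
// total install time divided by the number of packages actually installed.
function avgMsPerPackage(totalMs, packageCount) {
  return totalMs / packageCount;
}

// Package Manager A: 1,000 packages in 10s -> ~10ms per package
console.log(avgMsPerPackage(10_000, 1_000)); // 10
// Package Manager B: 10 packages in 1s -> ~100ms per package
console.log(avgMsPerPackage(1_000, 10)); // 100
```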
## Testing Package Installation

The installation tests we run today mimic a cold-cache scenario for a variety of test fixtures (i.e. we install the packages of a `next`, `vue`, `svelte` & `astro` starter project). We will likely add lockfile & warm cache tests in the near future.
- `deno`
- `bun`
## Testing Script Execution (WIP)

This suite also tests the performance of basic script execution (ex. `npm run foo`). Notably, for any given build, test or deployment task, the spawning of the process is a fraction of the overall execution time. That said, this is a commonly tracked workflow by various developer tools as it involves the common set of tasks: startup, filesystem read (`package.json`) & finally, spawning the process/command.
- `turborepo`
- `nx`
### Output

Results of the test runs are found in the Actions Artifacts "results". We will eventually add a visualization of the results.