`docs/versioned_docs/version-0.15.0/hardware-requirements.md`
# Hardware Requirements :computer:
Juno can be used either as part of a **validator** setup during Starknet staking v2 ([read more](https://nethermindeth.github.io/starknet-staking-v2/)) or as a **full node** serving RPC requests. Hardware requirements will vary depending on the intended usage.
Each hardware component impacts different aspects of node performance:
- **High-speed CPU cores** allow the node to execute Cairo-heavy RPC methods, such as `starknet_traceTransaction` or `starknet_estimateFee`, more quickly.
- **Multiple CPU cores** (or threads) enable Juno to perform more tasks concurrently, which becomes especially important when serving a high volume of RPC requests.
- **More RAM** reduces the likelihood of slowdowns when handling multiple data-intensive RPC requests.
- **Fast SSD storage** significantly improves overall node performance. Nearly all internal processes require reading data (for RPC purposes) and writing data (during syncing), so faster disk I/O directly translates into faster request handling and synchronization.
:::tip
Remember to pair your hardware components accordingly: a very powerful CPU will provide minimal improvement if paired with a disk with slow read and write speeds.
:::
## Minimum requirements (Validators)
These requirements are the absolute minimum to comfortably run a Juno node. They allow the node to stay in sync and perform validation duties, and are also well capable of serving the RPC needs of individuals or small groups.
- **CPU**: 4 CPU cores
- **RAM**: 8GB or more
- **Storage**: High-speed NVMe SSD drive
## Recommended requirements (RPC providers)
With this configuration, Juno nodes can work as servers satisfying multiple RPC requests.
- **CPU**: 16 high-speed CPU cores
- **RAM**: 64GB of RAM
- **Storage**: Highest-speed NVMe SSD drive
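To check whether a machine meets either tier, standard Linux utilities are enough. This is a quick sketch assuming a typical distribution where `nproc`, `free`, and `df` are available:

```shell
# Count the CPU cores/threads available to the node
nproc

# Show total and available RAM
free -h

# Show free disk space on the volume that will hold Juno's database
df -h "$HOME"
```

Compare the reported core count, memory, and free space against the lists above before provisioning.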
:::tip
We intend the above specifications as a guideline; choose the hardware that best fits your usage. If unsure, feel free to [reach the team](https://juno.nethermind.io/#community-and-support)!
:::
`docs/versioned_docs/version-0.15.0/plugins.md`
---
title: Plugins
---
Juno supports plugins that satisfy the `JunoPlugin` interface, enabling developers to extend and customize Juno's behaviour and functionality by dynamically loading external plugins during runtime.
If you prefer the traditional two-step approach or have limited bandwidth, you can download the snapshot first and extract it later. Note that this requires twice the disk space of the Juno snapshot; if space is limited, you can always try the **alternative method** below.
```bash
zstd -d juno_mainnet.tar.zst -c | tar -xvf - -C $HOME/snapshots/mainnet
```
#### Alternative method: Stream the snapshot directly
:::warning
Streaming can become unreliable unless network conditions are very good, and may require multiple restarts. Resort to this method only if disk space is at a premium.
:::
Create a subfolder inside `$HOME/snapshots` into which to stream the download:
```bash
# For Mainnet
mkdir $HOME/snapshots/mainnet/
```
Download and extract the snapshot directly to your target directory:
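The streaming pipeline can be exercised end to end on a tiny local archive first. This is a sketch with illustrative paths, where `cat` stands in for the real `curl -sL <snapshot-url>` download; it assumes the `zstd` and `tar` CLIs are installed:

```shell
# Build a small .tar.zst archive to stand in for the snapshot
mkdir -p /tmp/juno-demo/src /tmp/juno-demo/out
echo "state data" > /tmp/juno-demo/src/chunk.bin
tar -cf - -C /tmp/juno-demo/src . | zstd -q -f -o /tmp/juno-demo/snapshot.tar.zst

# Stream-decompress and extract in one pass, never storing the
# decompressed archive on disk; in real use, replace `cat` with
# `curl -sL <snapshot-url>`
cat /tmp/juno-demo/snapshot.tar.zst | zstd -d | tar -xf - -C /tmp/juno-demo/out

cat /tmp/juno-demo/out/chunk.bin   # -> state data
```

Because the decompressed data flows straight into `tar`, only the final extracted database ever occupies disk space, which is what makes this method viable when space is tight.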
It is important for full nodes to scale according to the hardware on which they run. To that end, the following is a list of configuration options users can tune based on their hardware specs to maximize the performance of their Juno node.
The default value of each of these options is set to maximize performance on a machine that matches Juno's minimum requirements, described in the **Hardware Requirements** section.
## Database Compression
Set by the `--db-compression` flag, this applies a compression algorithm to the database **every time** Juno writes to it.
Available options:
- `snappy`: Fast compression with a low compression ratio
- `zstd`: Slower, but reduces storage significantly
- `minlz`: Alternative compression option
The choice of compression algorithm is therefore a trade-off between **disk space** and **CPU** usage on every disk operation.
We recommend `zstd` because it is fast enough not to delay any process significantly while providing a large reduction in database size.
:::info
Note that once the compression setting is changed, the existing database is not recompressed immediately but gradually, as new information is written during normal node usage.
:::
:::info
There is a secret `zstd1` option that provides far greater performance than `zstd`, but we are still testing it and it may become the default later.
:::
## Database Memory Table Size
Set by the `--db-memtable-size` flag (default: 256 MB), this controls the amount of memory allocated for the database memtable. The memtable is an in-memory buffer where writes are stored before being flushed to disk.
A sensible default is **256 MB** for nodes that satisfy the minimum requirements. Increasing this value reduces the frequency of disk flushes, which can improve write throughput during sync.
:::warning
Setting this value too high can cause **uneven write performance**. Larger memtables mean flushes happen less frequently but involve more data at once, leading to bursty I/O patterns. If writes accumulate faster than the database can flush, writes will stall entirely until flushing catches up. A moderate value like 256 MB balances flush frequency with I/O smoothness.
:::
## Database Compaction Concurrency
Set by the `--db-compaction-concurrency` flag, this controls how many concurrent compaction workers the database uses. Compaction is the background process that merges and optimises data on disk.
Format options:
- `N`: Sets the range from 1 to N workers (e.g., `--db-compaction-concurrency=4`)
- `M,N`: Sets the range from M to N workers (e.g., `--db-compaction-concurrency=2,8`)
The default is `1,GOMAXPROCS/2` (an upper bound of half the available CPU cores). Increasing the upper bound on systems with many cores can speed up compaction, at the cost of higher CPU usage while syncing parts of the chain that saw heavy activity.
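To set that upper bound explicitly, the flag value can be derived from the machine's core count. This is a sketch; the `./build/juno` binary path is illustrative:

```shell
# Derive the upper bound from the machine: half the cores, minimum 1,
# mirroring the default of GOMAXPROCS/2
upper=$(( $(nproc) / 2 ))
if [ "$upper" -lt 1 ]; then upper=1; fi

echo "--db-compaction-concurrency=1,$upper"
# Passed to the node, e.g.:
#   ./build/juno --db-compaction-concurrency="1,$upper"
```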
:::info
Note that this mainly improves syncing speed while the node is behind the tip of the chain; after reaching the latest block, resource usage gradually returns to a minimum.
:::
## Database Cache Size
Set by the `--db-cache-size` flag (default: 1024 MB), this determines the amount of memory allocated for caching frequently accessed data from the database.
A larger cache reduces disk reads and improves query performance. On systems with ample memory, increasing this value (e.g., 2048 or 4096 MB) can significantly improve RPC response times and overall node performance.
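Putting the options in this section together, a tuned invocation for a machine closer to the RPC-provider tier might look like the following. This is a sketch, not a prescription: the binary path, the `--network` flag, and the exact values are assumptions to adapt to your own hardware, with the memtable and cache sizes given in MB to match the defaults quoted above:

```shell
# Illustrative tuned invocation for a many-core machine with ample RAM
./build/juno \
  --network mainnet \
  --db-compression zstd \
  --db-memtable-size 512 \
  --db-cache-size 4096 \
  --db-compaction-concurrency 2,8
```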