---
description: Learn how elastic scaling in Polkadot boosts parachain throughput, reduces latency, and supports dynamic, cost-efficient resource allocation.
---

# Elastic Scaling

## Introduction

Polkadot's architecture delivers scalability and security through its shared security model, where the relay chain coordinates and validates multiple parallel chains.

Elastic scaling enhances this architecture by allowing parachains to utilize multiple computational cores simultaneously, breaking the previous 1:1 relationship between parachain and relay chain blocks.

This technical advancement enables parachains to process multiple blocks within a single relay chain block, significantly increasing throughput capabilities. By leveraging [Agile Coretime](/polkadot-protocol/architecture/polkadot-chain/agile-coretime){target=\_blank}, parachains can dynamically adjust their processing capacity based on demand, creating an efficient and responsive infrastructure for high-throughput applications.

## How Elastic Scaling Works

Elastic scaling enables parachains to process multiple blocks in parallel by utilizing additional cores on the relay chain. This section provides a technical analysis of the performance advantages and implementation details.

Consider a parachain that needs to process four consecutive parablocks (P1-P4). With traditional single-core allocation, the validation process follows a strictly sequential pattern. Each parablock undergoes a two-phase process on the relay chain:

1. **Backing phase** - validators create and distribute validity statements
2. **Inclusion phase** - the parablock is included in the relay chain after availability verification

In the single-core scenario (assuming a 6-second relay chain block time), processing four parablocks requires approximately 30 seconds:

```mermaid
sequenceDiagram
    participant R1 as R1
    participant R2 as R2
    participant R3 as R3
    participant R4 as R4
    participant R5 as R5

    Note over R1,R5: Single Core Scenario

    rect rgb(200, 220, 240)
        Note right of R1: Core C1
        R1->>R1: Back P1
        R2->>R2: Include P1
        R2->>R2: Back P2
        R3->>R3: Include P2
        R3->>R3: Back P3
        R4->>R4: Include P3
        R4->>R4: Back P4
        R5->>R5: Include P4
    end
```

With elastic scaling utilizing two cores simultaneously, the same four parablocks can be processed in approximately 18 seconds:

```mermaid
sequenceDiagram
    participant R1 as R1
    participant R2 as R2
    participant R3 as R3
    participant R4 as R4
    participant R5 as R5

    Note over R1,R3: Multi-Core Scenario

    rect rgb(200, 220, 240)
        Note right of R1: Core C1
        R1->>R1: Back P1
        R2->>R2: Include P1
        R2->>R2: Back P2
        R3->>R3: Include P2
    end

    rect rgb(220, 200, 240)
        Note right of R1: Core C2
        R1->>R1: Back P3
        R2->>R2: Include P3
        R2->>R2: Back P4
        R3->>R3: Include P4
    end
```

The relay chain processes these multiple parablocks as independent validation units during backing, availability, and approval phases. However, during inclusion, it verifies that their state roots align properly to maintain chain consistency.

From an implementation perspective:

- **Parachain side** - collators must increase their block production rate to fully utilize multiple cores (see the sketch after this list)
- **Validation process** - each core operates independently but with coordinated state verification
- **Resource management** - cores are dynamically allocated based on parachain requirements
- **State consistency** - while backed and processed in parallel, the parablocks maintain sequential state transitions

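On the parachain side, the block production rate is typically set in the runtime. The following is a minimal sketch, assuming a Cumulus-based runtime that uses the `FixedVelocityConsensusHook` from `cumulus_pallet_aura_ext`; the constant names follow the common parachain template and may differ across runtime versions.

```rust
// Sketch: runtime constants that govern how many parachain blocks are
// produced, and left awaiting inclusion, per relay chain block.
// Constant names follow the Cumulus parachain template (an assumption);
// adapt them to your own runtime.

/// The relay chain produces one block every 6 seconds.
pub const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6_000;

/// Target a 2-second parachain block time, i.e. three parachain blocks
/// per relay chain block (one per core when running on three cores).
pub const MILLI_SECS_PER_BLOCK: u64 = 2_000;

/// Maximum number of parachain blocks authored per relay chain block.
pub const BLOCK_PROCESSING_VELOCITY: u32 = 3;

/// Maximum number of authored-but-not-yet-included parachain blocks.
pub const UNINCLUDED_SEGMENT_CAPACITY: u32 = 2 * BLOCK_PROCESSING_VELOCITY + 1;

// These constants are enforced by the consensus hook wired into
// `cumulus_pallet_parachain_system`, for example:
//
// type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
//     Runtime,
//     RELAY_CHAIN_SLOT_DURATION_MILLIS,
//     BLOCK_PROCESSING_VELOCITY,
//     UNINCLUDED_SEGMENT_CAPACITY,
// >;
```

With these example values, a collator authors three parablocks per 6-second relay chain slot, matching the multi-core flow illustrated above.
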
## Benefits of Elastic Scaling

- **Increased throughput** - multiple concurrent cores enable parachains to process transactions at multiples of their previous capacity. By breaking the 1:1 relationship between parachain and relay chain blocks, applications can achieve significantly higher transaction volumes

- **Lower latency** - transaction finality improves substantially with multi-core processing. Parachains currently achieve 2-second latency with three cores, with projected improvements to 500ms using 12 cores, enabling near-real-time application responsiveness

- **Resource efficiency** - applications acquire computational resources precisely matched to their needs, eliminating wasteful over-provisioning. Coretime can be purchased at granular intervals (blocks, hours, days), creating cost-effective operations, particularly for applications with variable transaction patterns (see the sketch after this list)

- **Scalable growth** - new applications can launch with minimal initial resource commitment and scale dynamically as adoption increases. This eliminates the traditional paradox of either over-allocating resources (increasing costs) or under-allocating (degrading performance) during growth phases

- **Workload distribution** - parachains intelligently distribute workloads across cores during peak demand periods and release resources when traffic subsides. Paired with secondary coretime markets, this ensures maximum resource utilization across the entire network ecosystem

- **Reliable performance** - end users experience reliable application performance regardless of network congestion levels. Applications maintain responsiveness even during traffic spikes, eliminating performance degradation that commonly impacts blockchain applications during high-demand periods

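As an illustration of acquiring coretime programmatically, the following is a hedged sketch that uses subxt's dynamic API to submit a `purchase` call to the `Broker` pallet on the Coretime system chain. The endpoint, price limit, and dev signer are placeholders, and pallet/call names can vary between runtime releases, so treat this as a sketch rather than a drop-in implementation.

```rust
// Sketch: purchasing a block of bulk coretime on the Coretime system chain
// using subxt's dynamic API. Endpoint, signer, and price limit are
// illustrative placeholders; pallet/call names may differ by runtime version.
use subxt::dynamic::{tx, Value};
use subxt::{OnlineClient, PolkadotConfig};
use subxt_signer::sr25519::dev;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a Coretime chain node (placeholder endpoint).
    let api = OnlineClient::<PolkadotConfig>::from_url("wss://coretime.example.net").await?;

    // Broker::purchase(price_limit): buy one core for the upcoming region,
    // refusing to pay more than `price_limit` plancks.
    let price_limit: u128 = 50_000_000_000; // assumption: adjust to the current sale price
    let purchase = tx("Broker", "purchase", vec![Value::u128(price_limit)]);

    // Dev signer for illustration only; use a real account in practice.
    let signer = dev::alice();
    api.tx()
        .sign_and_submit_then_watch_default(&purchase, &signer)
        .await?
        .wait_for_finalized_success()
        .await?;

    println!("Bulk coretime region purchased");
    Ok(())
}
```

The purchased region can then be assigned to a parachain (and later partitioned or interlaced) through the same pallet, which is how a team adjusts the granularity mentioned above.
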
## Use Cases

Elastic scaling enables applications to dynamically adjust their resource consumption based on real-time demand. This is especially valuable for decentralized applications where usage patterns can be highly variable. The following examples illustrate common scenarios where elastic scaling delivers significant performance and cost-efficiency benefits:

### Handling Sudden Traffic Spikes

Many decentralized applications experience unpredictable, high-volume traffic bursts, especially in areas like gaming, DeFi protocols, NFT auctions, messaging platforms, and social media. Elastic scaling allows these systems to acquire additional coretime during peak usage and release it during quieter periods, ensuring responsiveness without incurring constant high infrastructure costs.

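One way a parachain can absorb such a spike is to buy execution for individual blocks on demand. The snippet below is a hedged sketch using subxt's dynamic API to place an on-demand coretime order on the relay chain; the pallet name (`OnDemand`, previously `OnDemandAssignmentProvider`), endpoint, para ID, and amounts are assumptions that vary by network and runtime version.

```rust
// Sketch: placing an on-demand coretime order for a single parachain block.
// Pallet name, endpoint, para ID, and max amount are illustrative assumptions.
use subxt::dynamic::{tx, Value};
use subxt::{OnlineClient, PolkadotConfig};
use subxt_signer::sr25519::dev;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a relay chain node (placeholder endpoint).
    let api = OnlineClient::<PolkadotConfig>::from_url("wss://relay.example.net").await?;

    let para_id: u32 = 2_000; // assumption: your parachain's ID
    let max_amount: u128 = 10_000_000_000; // max price in plancks you are willing to pay

    // OnDemand::place_order_allow_death(max_amount, para_id) buys coretime
    // for one parachain block; repeat the order while the traffic spike lasts.
    let order = tx(
        "OnDemand",
        "place_order_allow_death",
        vec![Value::u128(max_amount), Value::u128(para_id as u128)],
    );

    let signer = dev::alice(); // dev key for illustration only
    api.tx()
        .sign_and_submit_then_watch_default(&order, &signer)
        .await?
        .wait_for_finalized_success()
        .await?;

    Ok(())
}
```
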
### Supporting Early-Stage Growth

Startups and new projects often begin with uncertain or volatile demand. With elastic scaling, teams can launch with minimal compute resources (e.g., a single core) and gradually scale as adoption increases. This prevents overprovisioning and enables cost-efficient growth until the application is ready for more permanent or horizontal scaling.

### Scaling Massive IoT Networks

Internet of Things (IoT) applications often involve processing data from millions of devices in real time. Elastic scaling supports this need by enabling high-throughput transaction processing as demand fluctuates. Combined with Polkadot’s shared security model, it provides a reliable and privacy-preserving foundation for large-scale IoT deployments.

### Powering Real-Time, Low-Latency Systems

Applications like payment processors, trading platforms, gaming engines, or real-time data feeds require fast, consistent performance. Elastic scaling can reduce execution latency during demand spikes, helping ensure low-latency, reliable service even under heavy load.