|
987 | 987 | {"page_id": "reference-parachains-consensus-elastic-scaling", "page_title": "Elastic Scaling", "index": 5, "depth": 3, "title": "Supporting Early-Stage Growth", "anchor": "supporting-early-stage-growth", "start_char": 6786, "end_char": 7168, "estimated_token_count": 69, "token_estimator": "heuristic-v1", "text": "### Supporting Early-Stage Growth\n\nStartups and new projects often begin with uncertain or volatile demand. With elastic scaling, teams can launch with minimal compute resources (e.g., a single core) and gradually scale as adoption increases. This prevents overprovisioning and enables cost-efficient growth until the application is ready for more permanent or horizontal scaling."} |
988 | 988 | {"page_id": "reference-parachains-consensus-elastic-scaling", "page_title": "Elastic Scaling", "index": 6, "depth": 3, "title": "Scaling Massive IoT Networks", "anchor": "scaling-massive-iot-networks", "start_char": 7168, "end_char": 7556, "estimated_token_count": 67, "token_estimator": "heuristic-v1", "text": "### Scaling Massive IoT Networks\n\nInternet of Things (IoT) applications often involve processing data from millions of devices in real time. Elastic scaling supports this need by enabling high-throughput transaction processing as demand fluctuates. Combined with Polkadot’s shared security model, it provides a reliable and privacy-preserving foundation for large-scale IoT deployments."} |
989 | 989 | {"page_id": "reference-parachains-consensus-elastic-scaling", "page_title": "Elastic Scaling", "index": 7, "depth": 3, "title": "Powering Real-Time, Low-Latency Systems", "anchor": "powering-real-time-low-latency-systems", "start_char": 7556, "end_char": 7871, "estimated_token_count": 58, "token_estimator": "heuristic-v1", "text": "### Powering Real-Time, Low-Latency Systems\n\nApplications like payment processors, trading platforms, gaming engines, or real-time data feeds require fast, consistent performance. Elastic scaling can reduce execution latency during demand spikes, helping ensure low-latency, reliable service even under heavy load."} |
990 | | -{"page_id": "reference-parachains-consensus-inclusion-pipeline", "page_title": "Inclusion Pipeline", "index": 0, "depth": 2, "title": "Pipeline Stages", "anchor": "pipeline-stages", "start_char": 676, "end_char": 3317, "estimated_token_count": 628, "token_estimator": "heuristic-v1", "text": "## Pipeline Stages\n\nThe inclusion pipeline consists of three main stages:\n\n```mermaid\n%%{init: {\"flowchart\": {\"nodeSpacing\": 40, \"rankSpacing\": 60}}}%%\nflowchart LR\n %% Keep the pipeline on one row (container is hidden)\n subgraph Row[\" \"]\n direction LR\n G[\"Generation\"] --> B[\"Backing\"] --> I[\"Inclusion\"]\n end\n style Row fill:none,stroke:none\n\n %% Context: plain text (no box) pointing to both G and B\n C[\"Context\"]:::nobox\n C -.-> G\n C -.-> B\n\n classDef nobox fill:none,stroke:none,color:inherit;\n```\n**Context**: Context of state is provided as input in order for collators and validators to build a parablock during the generation and backing stages, respectively. This context is provided by two sources:\n\n* **Relay Parent**: The relay chain block which a given parablock is anchored to. Note that the relay parent of a parablock and the relay block including that parablock are always different. This context source lives on the relay chain.\n\n* **Unincluded Segments**: Chains of candidate parablocks that have yet to be included in the relay chain, i.e. they can contain blocks at any stage pre-inclusion. The core functionality that [Async Backing](/reference/parachains/consensus/async-backing) brings is the ability to build on these unincluded segments of block ancestors rather than building only on ancestors included in the relay chain state. This context source lives on the collators.\n\n**Generation**: Collators *execute* their blockchain's core functionality to generate a new block, producing a [proof-of-validity](https://paritytech.github.io/polkadot-sdk/book/types/availability.html?#proof-of-validity) (PoV), which is passed to validators selected for backing. The PoV is composed of:\n\n- The block candidate (list of state transitions)\n- The values in the parachain's database that the block modifies\n- The hashes of the unaffected points in the Merkle tree\n\n\n**Backing**: A subset of active validators verify that the parablock follows the state transition rules of the parachain and sign a [validity statement](https://paritytech.github.io/polkadot-sdk/book/types/backing.html?#validity-attestation) about the PoV which can have a positive or negative outcome. With enough positive statements, the block is backed and noted on the relay chain, but is still pending approval.\n\n**Inclusion**: Validators gossip [erasure code chunks](https://paritytech.github.io/polkadot-sdk/book/types/availability.html#erasure-chunk) and put the parablock through the final [approval process](https://paritytech.github.io/polkadot-sdk/book/protocol-approval.html) before it is considered *included* in the relay chain."} |
| 990 | +{"page_id": "reference-parachains-consensus-inclusion-pipeline", "page_title": "Inclusion Pipeline", "index": 0, "depth": 2, "title": "Pipeline Stages", "anchor": "pipeline-stages", "start_char": 676, "end_char": 3441, "estimated_token_count": 656, "token_estimator": "heuristic-v1", "text": "## Pipeline Stages\n\nThe inclusion pipeline consists of three main stages:\n\n```mermaid\n%%{init: {\"flowchart\": {\"nodeSpacing\": 40, \"rankSpacing\": 60}}}%%\nflowchart LR\n %% Keep the pipeline on one row (container is hidden)\n subgraph Row[\" \"]\n direction LR\n G[\"Generation\"] --> B[\"Backing\"] --> I[\"Inclusion\"]\n end\n style Row fill:none,stroke:none\n\n %% Context: plain text (no box) pointing to both G and B\n C[\"Context\"]:::nobox\n C -.-> G\n C -.-> B\n\n classDef nobox fill:none,stroke:none,color:inherit;\n```\n**Context**: Context of state is provided as input in order for collators and validators to build a parablock during the generation and backing stages, respectively. This context is provided by two sources:\n\n* **Relay Parent**: The relay chain block which a given parablock is anchored to. Note that the relay parent of a parablock and the relay block including that parablock are always different. This context source lives on the relay chain.\n\n* **Unincluded Segments**: Chains of candidate parablocks that have yet to be included in the relay chain, i.e. they can contain blocks at any stage pre-inclusion. The core functionality that [Async Backing](/reference/parachains/consensus/async-backing) brings is the ability to build on these unincluded segments of block ancestors rather than building only on ancestors included in the relay chain state. This context source lives on the collators.\n\n**Generation**: Collators *execute* their blockchain's core functionality to generate a new block, producing a [proof-of-validity](https://paritytech.github.io/polkadot-sdk/book/types/availability.html?#proof-of-validity) (PoV), which is passed to validators selected for backing. The PoV is composed of:\n\n- The block candidate (list of state transitions)\n- The values in the parachain's database that the block modifies\n- The hashes of the unaffected points in the Merkle tree\n\n\n**Backing**: A subset of active validators verify that the parablock follows the state transition rules of the parachain and sign a [validity statement](https://paritytech.github.io/polkadot-sdk/book/types/backing.html?#validity-attestation) about the PoV which can have a positive or negative outcome. With enough positive statements (at least 2/3 of assigned validators), the candidate is considered backable. It is then noted in a fork on the relay chain, at which point it is considered backed, ready for the next stage of the pipeline.\n\n**Inclusion**: Validators gossip [erasure code chunks](https://paritytech.github.io/polkadot-sdk/book/types/availability.html#erasure-chunk) and put the parablock through the final [approval process](https://paritytech.github.io/polkadot-sdk/book/protocol-approval.html) before it is considered *included* in the relay chain."} |
991 | 991 | {"page_id": "reference-parachains-consensus-old-notes", "page_title": "reference-parachains-consensus-old-notes", "index": 0, "depth": 3, "title": "Compute Advantage", "anchor": "compute-advantage", "start_char": 0, "end_char": 690, "estimated_token_count": 214, "token_estimator": "heuristic-v1", "text": "### Compute Advantage\nBelow is a table showing the main advantages of asynchronous over synchronous backing.\n\n| | Sync Backing | Async Backing | Async Backing Advantage |\n| ------------------------------------ | ------------ | ------------ | ----------------------------------------- |\n| **Parablocks included every** | 12 seconds | 6 seconds | **2x** more parablocks included |\n| **Parablock maximum execution time** | 0.5 seconds | 2 seconds | **4x** more execution time in a parablock |\n| **Total Computer Gain (per core)** | | | **8x Compute Throughput** |"} |
992 | 992 | {"page_id": "reference-parachains-cryptography", "page_title": "Cryptography", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 16, "end_char": 525, "estimated_token_count": 73, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nCryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential. This page comprehensively overviews cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem."} |
993 | 993 | {"page_id": "reference-parachains-cryptography", "page_title": "Cryptography", "index": 1, "depth": 2, "title": "Hash Functions", "anchor": "hash-functions", "start_char": 525, "end_char": 1170, "estimated_token_count": 130, "token_estimator": "heuristic-v1", "text": "## Hash Functions\n\nHash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. This form of mapping is known as the [\"pigeonhole principle,\"](https://en.wikipedia.org/wiki/Pigeonhole_principle){target=\\_blank} it is primarily implemented to efficiently and verifiably identify data from large sets."} |
|