src/content/docs/pipelines/concepts/how-pipelines-work.mdx (1 addition, 8 deletions)
Ingested data is buffered until a sufficiently large batch of data has accumulated.
Finally, the batch of data is converted into output files, which are compressed and delivered to the configured R2 bucket. Any transient failures, such as network errors, are retried automatically.
Once files have been successfully written to R2, the buffers are flushed.
## How a Pipeline handles updates
Data delivery is guaranteed even while updating an existing pipeline. Updating an existing pipeline effectively creates a new deployment, including all your previously configured options. Requests are gracefully re-routed to the new pipeline, while the old pipeline continues to write data into your destination. Once the old pipeline is fully drained, it is spun down, so you will not lose data even while updating a pipeline.
## How Pipelines scale
Pipelines are organized into shards. You can [customize the number of shards](/pipelines/build-with-pipelines/shards) to increase maximum throughput, or to reduce the number of output files generated.
Each shard consists of layers of Durable Objects. Shards are stateless, so your pipeline scales horizontally by increasing the number of shards.
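As a hedged illustration, the shard count can be set when creating a pipeline with Wrangler. The pipeline and bucket names below are placeholders, and the exact flag names are assumptions that may differ between Wrangler versions; confirm them with `npx wrangler pipelines create --help` and the shards documentation linked above:

```shell
# Hypothetical invocation: create a pipeline with 8 shards for higher
# throughput. Flag names should be verified against the current Wrangler CLI.
npx wrangler pipelines create my-pipeline --r2-bucket my-bucket --shard-count 8
```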
## What if I send too much data? Do Pipelines communicate backpressure?
If you send too much data, the pipeline will communicate backpressure by returning a 429 response to HTTP requests, or throwing an error if using the Workers API. Refer to the [limits](/pipelines/platform/limits) to learn how much volume a single pipeline can support.
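A caller can respond to that backpressure by retrying with exponential backoff. The sketch below is not part of the Pipelines API; `sendBatch` is a stand-in for your own send function, whether that is an HTTP POST to the pipeline endpoint (which returns a 429 status under load) or a wrapper around the Workers binding:

```typescript
type SendResult = { status: number };

// Retry a batch send whenever the pipeline signals backpressure (HTTP 429),
// using exponential backoff with jitter between attempts.
async function sendWithBackoff(
  sendBatch: () => Promise<SendResult>,
  maxRetries = 5,
  baseDelayMs = 100,
): Promise<SendResult> {
  for (let attempt = 0; ; attempt++) {
    const result = await sendBatch();
    if (result.status !== 429 || attempt >= maxRetries) {
      // Success, a non-retryable status, or retries exhausted.
      return result;
    }
    // Exponential backoff with jitter before retrying the same batch.
    const delay = baseDelayMs * 2 ** attempt * (0.5 + Math.random() / 2);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

If you hit backpressure persistently rather than in bursts, increasing the shard count is the better fix than retrying harder.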
src/content/docs/pipelines/index.mdx (12 additions, 8 deletions)
Ingest real-time data streams, and load to R2, using Cloudflare Pipelines.
<Plan type="paid" />
Cloudflare Pipelines lets you ingest high volumes of real-time data without managing any infrastructure. A single pipeline can ingest up to 100 MB of data per second. Ingested data is automatically batched, written to output files, and delivered to an R2 bucket in your account. You can use Pipelines to build a data lake of clickstream data, or to archive logs from a service.
## Create your first pipeline
You can set up a pipeline to ingest data via HTTP, and deliver output to R2, with a single command:
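A hedged sketch of that command follows. The pipeline and bucket names are placeholders, and the exact flag names are assumptions that may vary between Wrangler versions; run `npx wrangler pipelines create --help` to confirm:

```shell
# Hypothetical invocation: creates a pipeline with an HTTP ingestion
# endpoint that delivers output to the named R2 bucket.
npx wrangler pipelines create my-clickstream-pipeline --r2-bucket my-bucket
```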
0 commit comments