Changed file: src/content/docs/pipelines/concepts/how-pipelines-work.mdx (+4 −4 lines)
```diff
@@ -21,13 +21,13 @@ Finally, Pipelines supports writing data to [R2 Object Storage](/r2/). Ingested
 We plan to support more sources, data formats, and sinks, in the future.
 
 ## Data durability, and the lifecycle of a request
-If you make a request to send data to a pipeline, and receive a successful response, we guarantee that the data will be delivered.
+If you make a request to send data to a pipeline, and receive a successful response, we guarantee that the data will be delivered to your configured destination.
 
-Any data sent to a pipeline is durably committed to storage. Pipelines use [SQLite backed Durable Objects](durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend) as a buffer for ingested records. A pipeline will only return a response after data has been successfully stored.
+Data sent to a pipeline is durably committed to storage. Pipelines use [SQLite backed Durable Objects](durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend) as a buffer for ingested records. A pipeline will only return a response after data has been successfully stored.
```
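The guarantee in the updated paragraphs is visible to callers as a simple contract: a pipeline acknowledges a request only after the records have been durably committed to its buffer. Below is a minimal sketch of that contract from the sender's side; the endpoint URL and JSON-array payload are assumptions for illustration, not the documented Pipelines API.

```ts
// Minimal sketch of the caller's view of the durability guarantee.
// PIPELINE_ENDPOINT is hypothetical; a real pipeline exposes its own
// ingestion endpoint.
const PIPELINE_ENDPOINT = "https://example-pipeline.example.com";

async function sendRecords(records: object[]): Promise<void> {
  const response = await fetch(PIPELINE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(records),
  });

  // A successful response is returned only after the pipeline has
  // durably committed the records to its Durable Object buffer, so a
  // 2xx status means delivery to the destination is guaranteed.
  if (!response.ok) {
    // No success response means no durability guarantee was given, so
    // the caller should retry the request.
    throw new Error(`Ingestion failed with status ${response.status}`);
  }
}
```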
```diff
 
-Ingested data is buffered until a sufficiently large batch of data has accumulated. Batching is useful to reduce the number of output files written out to R2. [Batch sizes are customizable](/pipelines/build-with-pipelines/output-settings/#customize-batch-behavior), in terms of data volume, rows, or time.
+Ingested data continues to be stored until a sufficiently large batch of data has accumulated. Batching is useful to reduce the number of output files written out to R2. [Batch sizes are customizable](/pipelines/build-with-pipelines/output-settings/#customize-batch-behavior), in terms of data volume, rows, or time.
```
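Since the customizable limits cover data volume, rows, and time, a natural reading (assumed here, and the usual semantics for batching systems) is that a batch is flushed as soon as any one limit is reached. A conceptual sketch of that rule follows; the threshold names are illustrative, not the actual configuration keys from the linked output-settings page.

```ts
// Conceptual sketch of the batching rule. The real buffering happens
// inside a SQLite-backed Durable Object; this only models the limits.
interface BatchLimits {
  maxBytes: number;   // data volume threshold
  maxRows: number;    // row count threshold
  maxSeconds: number; // time threshold
}

class BatchBuffer {
  private rows: string[] = [];
  private bytes = 0;
  private readonly openedAt = Date.now();

  constructor(private limits: BatchLimits) {}

  add(record: string): void {
    this.rows.push(record);
    this.bytes += record.length; // rough byte count, for illustration
  }

  // Assumed semantics: flush as soon as ANY one of the three limits is
  // reached, whichever comes first.
  shouldFlush(): boolean {
    const ageSeconds = (Date.now() - this.openedAt) / 1000;
    return (
      this.bytes >= this.limits.maxBytes ||
      this.rows.length >= this.limits.maxRows ||
      ageSeconds >= this.limits.maxSeconds
    );
  }
}
```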
```diff
 
-Finally, when a batch has reached its target size, it is written out to a file. The file is compressed, and is delivered to the configured R2 bucket. Any transient failures, such as network failures, are automatically retried.
+When a batch has reached its target size, the entire batch is written out to a file. The file is optionally compressed, and is delivered to an R2 bucket. Any transient failures, such as network failures, are automatically retried.
```
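The docs state only that transient failures are retried automatically; the sketch below assumes exponential backoff as the retry policy and uses a `writeToR2` callback as a stand-in for the pipeline's internal delivery machinery, neither of which is a public API.

```ts
// Sketch of the delivery step: one file per batch, retried on
// transient failures. The object key format is illustrative.
async function deliverBatch(
  file: Uint8Array,
  writeToR2: (key: string, body: Uint8Array) => Promise<void>,
  maxAttempts = 5,
): Promise<void> {
  const key = `batch-${Date.now()}.json.gz`;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await writeToR2(key, file);
      return; // delivered
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after max attempts
      // Assumed policy: exponential backoff between retries.
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100));
    }
  }
}
```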
```diff
 
 ## How a Pipeline handles updates
 Data delivery is guaranteed even while updating an existing pipeline. Updating an existing pipeline effectively creates a new deployment, including all your previously configured options. Requests are gracefully re-routed to the new pipeline. The old pipeline continues to write data into your destination. Once the old pipeline is fully drained, it is spun down.
```
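The update flow described in this context section is effectively a drain-and-swap: traffic moves to the new deployment at once, while the old deployment keeps flushing what it has already buffered and is shut down only when empty. The sketch below models that behavior; deployments are managed by the platform, and none of these types are user-facing APIs.

```ts
// Model of a zero-data-loss deployment swap (illustrative only).
interface Deployment {
  bufferedRecords(): number;
  flushRemaining(): Promise<void>; // keeps writing to the destination
  shutdown(): void;
}

async function rollover(
  oldDeployment: Deployment,
  newDeployment: Deployment,
  route: { current: Deployment },
): Promise<void> {
  route.current = newDeployment; // new requests go to the new deployment
  await oldDeployment.flushRemaining(); // old pipeline drains its buffer
  if (oldDeployment.bufferedRecords() === 0) {
    oldDeployment.shutdown(); // spun down only once fully drained
  }
}
```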