
Commit 3af05d6
Fixed broken links
1 parent e1a8c28 commit 3af05d6

3 files changed: 4 additions & 4 deletions

src/content/docs/pipelines/build-with-pipelines/output-settings.mdx
Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ By default, output files are compressed in the `gzip` format. Compression can be
 npx wrangler pipelines update [PIPELINE-NAME] --compression none
 ```
 
-Output files are named using a [UILD](/https://github.com/ulid/spec) slug, followed by an extension.
+Output files are named using a [UILD](https://github.com/ulid/spec) slug, followed by an extension.
 
 ## Customize batch behavior
 When configuring your pipeline, you can define how records are batched before they are delivered to R2. Batches of records are written out to a single output file.

src/content/docs/pipelines/concepts/how-pipelines-work.mdx
Lines changed: 2 additions & 2 deletions

@@ -23,9 +23,9 @@ We plan to support more sources, data formats, and sinks, in the future.
 ## Data durability, and the lifecycle of a request
 If you make a request to send data to a pipeline, and receive a successful response, we guarantee that the data will be delivered to your configured destination.
 
-Data sent to a pipeline is durably committed to storage. Pipelines use [SQLite backed Durable Objects](durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend) as a buffer for ingested records. A pipeline will only return a response after data has been successfully stored.
+Any data sent to a pipeline is durably committed to storage. Pipelines use [SQLite backed Durable Objects](/durable-objects/best-practices/access-durable-objects-storage/#sqlite-storage-backend) as a buffer for ingested records. A pipeline will only return a response after data has been successfully stored.
 
-Ingested data continues to be stored, until a sufficiently large batch of data has accumulated. Batching is useful to reduce the number of output files written out to R2. [Batch sizes are customizable](/pipelines/build-with-pipelines/output-settings/#customize-batch-behavior), in terms of data volume, rows, or time.
+Ingested data continues to be stored, until a sufficiently large batch of data has accumulated. Batching is useful to reduce the number of output files written out to R2. [Batch sizes are customizable](/pipelines/build-with-pipelines/output-settings/#customize-batch-behavior), in terms of data volume, rows, or time.
 
 When a batch has reached its target size, the batch entire is written out to a file. The file is optionally compressed, and is delivered to an R2 bucket. Any transient failures, such as network failures, are automatically retried.
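The lifecycle described in this hunk, where records accumulate until a volume, row-count, or time threshold is reached and are then flushed as a single output file, can be sketched as follows. This is an illustrative model only, not Cloudflare's implementation; the class name, threshold names, and default values are hypothetical.

```python
import time

class BatchBuffer:
    """Illustrative model of a pipeline batch buffer: records accumulate
    until a byte-volume, row-count, or age threshold is hit, then the
    whole batch is flushed as one unit (in a real pipeline, one R2 file).

    All names and defaults here are hypothetical, for illustration only.
    """

    def __init__(self, max_bytes=100_000_000, max_rows=10_000_000,
                 max_seconds=300, now=time.monotonic):
        self.max_bytes = max_bytes
        self.max_rows = max_rows
        self.max_seconds = max_seconds
        self.now = now          # injectable clock, so time-based flushes are testable
        self.records = []
        self.bytes = 0
        self.opened_at = None   # set when the first record of a batch arrives

    def add(self, record: bytes):
        """Buffer one record; return the full batch if any threshold is reached."""
        if self.opened_at is None:
            self.opened_at = self.now()
        self.records.append(record)
        self.bytes += len(record)
        if (self.bytes >= self.max_bytes
                or len(self.records) >= self.max_rows
                or self.now() - self.opened_at >= self.max_seconds):
            return self.flush()
        return None

    def flush(self):
        """Hand the accumulated batch to the caller and reset the buffer."""
        batch, self.records, self.bytes, self.opened_at = self.records, [], 0, None
        return batch

# Row-count threshold: the third record completes the batch.
buf = BatchBuffer(max_rows=3)
assert buf.add(b"a") is None
assert buf.add(b"b") is None
assert buf.add(b"c") == [b"a", b"b", b"c"]
```

The same shape covers the byte and time thresholds: a `BatchBuffer(max_bytes=...)` flushes once accumulated record sizes cross the limit, and the injected clock lets the age-based flush be exercised without real waiting.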

src/content/docs/pipelines/index.mdx
Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ You can now send data to your pipeline with:
 Refer to the [getting started guide](/pipelines/getting-started) to start building with pipelines.
 
 :::note
-While in beta, you will not be billed for Pipelines usage. You will be billed only for [R2 operations](r2/pricing/).
+While in beta, you will not be billed for Pipelines usage. You will be billed only for [R2 operations](/r2/pricing/).
 :::
 
 ***
