
Commit 7ff25f5

Fix broken links
1 parent 0d95372 commit 7ff25f5

3 files changed: 3 additions, 3 deletions

api-reference/v2/tables/post-table-rows.mdx

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ If a column is not included in the passed row data, it will be empty in the added
 </Accordion>
 
 <Accordion title="Add Rows from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).
 
 Then, to add all the row data in a stash to the table in a single atomic operation, use the `$stashID` reference in the `rows` field instead of providing the data inline:
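The `$stashID` reference described in the changed paragraph can be sketched roughly as follows. This is a minimal illustration of the two payload shapes, not the exact API contract; the column names and the stash ID are made up:

```python
import json

# Inline rows: fine for small datasets; the data travels in the request body.
inline_body = {
    "rows": [
        {"Name": "Alice", "Age": 30},
        {"Name": "Bob", "Age": 25},
    ]
}

# Stash reference: `rows` points at previously uploaded stash data
# instead of carrying it inline. The stash ID here is hypothetical.
stash_body = {
    "rows": {"$stashID": "my-upload-2024"},
}

print(json.dumps(stash_body))
```

The stash-reference body stays the same size no matter how large the uploaded dataset is, which is what makes the single atomic add possible.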

api-reference/v2/tables/post-tables.mdx

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ If a schema is passed in the payload, any passed row data must match that schema
 However, this is only appropriate for relatively small initial datasets (around a few hundred rows or less, depending on schema complexity). If you need to work with a larger dataset you should utilize stashing.
 </Accordion>
 <Accordion title="Create Table from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).
 
 Then, to create a table from a stash, you can use the `$stashID` reference in the `rows` field instead of providing the data inline:
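The "break down your dataset into smaller, more manageable pieces" step might look like this sketch. The 500-row chunk size is an arbitrary assumption for illustration, not a documented limit:

```python
def chunk_rows(rows, size=500):
    """Split a large dataset into smaller pieces for serial stash upload.

    Each piece would then be uploaded to the same stash ID, one serial
    index per piece. The chunk size is illustrative; pick one suited to
    your schema complexity.
    """
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

dataset = [{"Name": f"Row {i}"} for i in range(1200)]
pieces = list(chunk_rows(dataset))
print(len(pieces))  # 3 pieces: 500 + 500 + 200 rows
```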

api-reference/v2/tables/put-tables.mdx

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ When using a CSV or TSV request body, you cannot pass a schema. If you need to u
 </Accordion>
 
 <Accordion title="Reset table data from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).
 
 Then, to reset a table's data from the stash, use the `$stashID` reference in the `rows` field instead of providing the data inline:
