
Commit 4174fd7

Merge pull request #18 from glideapps/alex/add-api-limits

Document API operational limits

2 parents 6542c28 + cbfdc0e · commit 4174fd7

File tree

6 files changed: +36 −22 lines changed
Lines changed: 19 additions & 0 deletions

```diff
@@ -0,0 +1,19 @@
+---
+title: Limits
+description: 'Rate and operational limits for the Glide API'
+---
+
+## Payload Limits
+
+You should not send more than 15MB of data in a single request. If you need to work with more data, use [stashing](/api-reference/v2/stashing/introduction) to upload the data in 15MB chunks.
+
+## Row Limits
+
+Even when using stashing, there are limits to the number of rows you can work with in a single request. These limits are approximate and depend on the size of the rows in your dataset.
+
+
+| Endpoint                                                      | Row Limit |
+|---------------------------------------------------------------|------------|
+| [Create Table](/api-reference/v2/tables/post-tables)          | 8m        |
+| [Overwrite Table](/api-reference/v2/tables/put-tables)        | 8m        |
+| [Add Rows to Table](/api-reference/v2/tables/post-table-rows) | 250k      |
```
api-reference/v2/general/rate-limits.mdx

Lines changed: 0 additions & 12 deletions
This file was deleted.

api-reference/v2/resources/changelog.mdx

Lines changed: 6 additions & 0 deletions

```diff
@@ -3,6 +3,12 @@ title: Glide API Changelog
 sidebarTitle: Changelog
 ---
 
+### September 13, 2024
+
+- Introduced a new "Limits" document that outlines rate and operational limits for the API.
+- Updated guidelines for when to use stashing in line with the new doc.
+- Fixed the Bulk Import tutorial to use PUT instead of POST for the Stash Data endpoint.
+
 ### September 4, 2024
 
 - Removed "json" as a valid data type in column schemas for now.
```

api-reference/v2/stashing/introduction.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -13,9 +13,9 @@ Once all data has been uploaded to the stash, the stash can then be referenced i
 
 ## When to Use Stashing
 
-You should use stashing when:
+You should use stashing when both of the following conditions are met:
 
-* You have a large dataset that you want to upload to Glide. Anything larger than 5mb should be broken up into smaller chunks and stashed.
+* You have a large dataset that you want to upload to Glide. Anything larger than [15MB](/api-reference/v2/general/limits) should be broken up into smaller chunks and stashed.
 * You want to perform an atomic operation using a large dataset. For example, you may want to perform an import of data into an existing table but don't want users to see the intermediate state of the import or incremental updates while they're using their application.
 
 ## Stash IDs and Serials
```

api-reference/v2/tutorials/bulk-import.mdx

Lines changed: 7 additions & 7 deletions

````diff
@@ -36,25 +36,25 @@ You are responsible for ensuring that the stash ID is unique and stable across a
 
 ## Upload Data
 
-Once you have a stable stash ID, you can use the [stash data endpoint](/api-reference/v2/stashing/post-stashes-serial) to upload the data in stages.
+Once you have a stable stash ID, you can use the [stash data endpoint](/api-reference/v2/stashing/put-stashes-serial) to upload the data in chunks.
 
-Upload stages can be run in parallel to speed up the upload of large dataset, just be sure to use the same stash ID across uploads to ensure the final data set is complete.
+Chunks can be sent in parallel to speed up the upload of large datasets. Use the same stash ID across uploads to ensure the final data set is complete, and use the serial to control the order of the chunks within the stash.
 
-As an example, the following [stash](/api-reference/v2/stashing/post-stashes-serial) requests will create a final dataset consisting of the two rows identified by the stash ID `20240501-import`.
+As an example, the following [stash](/api-reference/v2/stashing/put-stashes-serial) requests will create a final dataset consisting of the two rows identified by the stash ID `20240501-import`. The trailing parameters of `1` and `2` in the request path are the serial IDs. The data in serial `1` will come first in the stash, and the data in serial `2` will come second, even if the requests are processed in a different order.
 
 <Tabs>
-  <Tab title="POST /stashes/20240501-import/1">
+  <Tab title="PUT /stashes/20240501-import/1">
     ```json
     [
       {
         "Name": "Alex",
         "Age": 30,
         "Birthday": "2024-07-03T10:24:08.285Z"
-      }
+      },
     ]
     ```
   </Tab>
-  <Tab title="POST /stashes/20240501-import/2">
+  <Tab title="PUT /stashes/20240501-import/2">
     ```json
     [
       {
@@ -67,7 +67,7 @@ As an example, the following [stash](/api-reference/v2/stashing/post-stashes-ser
   </Tab>
 </Tabs>
 
-<Note>The trailing parameters of `1` and `2` in the request path are the serial IDs, which distinguish and order the two uploads within the stash.</Note>
+<Note>The above is just an example. In practice, you should include more than one row per stash chunk, and if your complete dataset is only 2 rows, you do not need to use stashing at all. See [Limits](/api-reference/v2/general/limits) for guidance.</Note>
 
 ## Finalize Import
 
````
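
The parallel-upload pattern the tutorial describes (same stash ID, one serial per chunk, order fixed by serial rather than by arrival time) can be sketched as follows. This is not Glide client code: `put_chunk` is a hypothetical stand-in for an HTTP PUT to the stash data endpoint, stubbed out so the sketch stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def put_chunk(stash_id, serial, rows):
    # Hypothetical stand-in: a real uploader would PUT `rows` as JSON to the
    # stash data endpoint at /stashes/{stash_id}/{serial} with auth headers.
    return (f"PUT /stashes/{stash_id}/{serial}", rows)

def upload_stash(stash_id, chunks):
    # Chunks are sent in parallel; the 1-based serial fixes each chunk's final
    # position in the stash regardless of which request finishes first.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [
            pool.submit(put_chunk, stash_id, serial, chunk)
            for serial, chunk in enumerate(chunks, start=1)
        ]
        return [f.result() for f in futures]
```

For example, `upload_stash("20240501-import", [chunk1, chunk2])` would issue `PUT /stashes/20240501-import/1` and `PUT /stashes/20240501-import/2` concurrently, matching the two requests in the tabs above.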
mint.json

Lines changed: 2 additions & 1 deletion

```diff
@@ -27,7 +27,8 @@
       "pages": [
         "api-reference/v2/general/introduction",
         "api-reference/v2/general/authentication",
-        "api-reference/v2/general/errors"
+        "api-reference/v2/general/errors",
+        "api-reference/v2/general/limits"
       ]
     },
     {
```
