
Commit 6ccfe83

Merge pull request #35 from glideapps/api-update-glide-pr-31344
OpenAPI spec update from glideapps/glide#31344
2 parents 8ed92eb + 7ff25f5 · commit 6ccfe83

8 files changed: +84 −26 lines


api-reference/v2/general/errors.mdx

Lines changed: 40 additions & 1 deletion

@@ -52,4 +52,43 @@ curl --request PUT \
     "message": "Invalid request params: Stash ID must be 256 characters max, alphanumeric with dashes and underscores, no leading dash or underscore"
   }
 }
-```
+```
+
+### Invalid Row Data
+
+When adding or updating rows in a table, if the row data does not match the table schema, the API will return a `422` response status.
+
+#### Unknown Column
+
+```json
+{
+  "error": {
+    "type": "column_id_not_found",
+    "message": "Unknown column ID 'foo'"
+  }
+}
+```
+
+#### Invalid Value for Column
+
+```json
+{
+  "error": {
+    "type": "column_has_invalid_value",
+    "message": "Invalid value for column 'foo'"
+  }
+}
+```
+
+### Row Not Found
+
+When attempting to update a row that does not exist, the API will return a `404` response status.
+
+```json
+{
+  "error": {
+    "type": "row_not_found",
+    "message": "Row with ID 'XHz6kF2XSTGi1ADDbryjqw' not found"
+  }
+}
+```
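
For orientation, here is a minimal TypeScript sketch (not part of this commit) of how a client might branch on the error types documented above. The base URL, auth header, and request body shape are assumptions; only the `error.type` values and status codes come from the docs.

```typescript
// Hypothetical client-side handling of the documented error payloads.
// Assumed: base URL, Bearer auth, and a JSON body with a `rows` array.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

type GlideApiError = {
  error: { type: string; message: string };
};

async function addRow(tableID: string, row: Record<string, unknown>): Promise<void> {
  const res = await fetch(`${API_BASE}/tables/${tableID}/rows`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ rows: [row] }),
  });
  if (res.ok) return;

  const { error } = (await res.json()) as GlideApiError;
  switch (error.type) {
    case "column_id_not_found": // 422: row data names a column the table doesn't have
    case "column_has_invalid_value": // 422: value doesn't match the column's type
      throw new Error(`Row data does not match the table schema: ${error.message}`);
    case "row_not_found": // 404: returned when updating a row that doesn't exist
      throw new Error(error.message);
    default:
      throw new Error(`API error ${res.status}: ${error.message}`);
  }
}
```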

api-reference/v2/resources/changelog.mdx

Lines changed: 5 additions & 0 deletions

@@ -3,6 +3,11 @@ title: Glide API Changelog
 sidebarTitle: Changelog
 ---

+### December 13, 2024
+
+- Clarified that endpoints return row IDs in the same order as the input rows.
+- Clarified the requirements for row data to match the table's schema and what happens if it doesn't.
+
 ### November 26, 2024

 - Added a warning that using the `PUT /tables` endpoint to overwrite a table will clear user-specific columns.

api-reference/v2/tables/delete-table-row.mdx

Lines changed: 3 additions & 1 deletion

@@ -3,4 +3,6 @@ title: Delete Row
 openapi: delete /tables/{tableID}/rows/{rowID}
 ---

-Deletes a row in a Big Table. No error is returned if the row does not exist.
+Deletes a row in a Big Table.
+
+No error is returned if the row does not exist.
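
A small sketch (not part of the commit) of the delete semantics spelled out above, with the base URL and auth scheme assumed:

```typescript
// Deleting a row; per the docs, a row that no longer exists is not an error,
// so the call can be treated as idempotent. Base URL and auth are assumptions.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

async function deleteRow(tableID: string, rowID: string): Promise<void> {
  const res = await fetch(`${API_BASE}/tables/${tableID}/rows/${rowID}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  // No special handling for a missing row is needed.
  if (!res.ok) throw new Error(`Delete failed with status ${res.status}`);
}
```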

api-reference/v2/tables/patch-table-row.mdx

Lines changed: 3 additions & 1 deletion

@@ -3,4 +3,6 @@ title: Update Row
 openapi: patch /tables/{tableID}/rows/{rowID}
 ---

-Updates an existing row in a Big Table.
+Updates an existing row in a Big Table.
+
+If a column is not included in the passed row data, it will not be updated. If a column is passed that does not exist in the table schema, or with a value that does not match the column's type, the default behavior is for no update to be made and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.
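
To illustrate the partial-update behavior described in this change, a hypothetical sketch; the body shape (column name to new value), base URL, auth, and the example column name are assumptions.

```typescript
// Partial update: columns omitted from the body are left unchanged. If the body
// names an unknown column or a value of the wrong type, the default is a 422 and
// no update (see errors.mdx above); `onSchemaError` can change that behavior.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

async function updateRow(
  tableID: string,
  rowID: string,
  partialRow: Record<string, unknown>, // assumed shape: column -> new value
): Promise<void> {
  const res = await fetch(`${API_BASE}/tables/${tableID}/rows/${rowID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(partialRow),
  });
  if (!res.ok) throw new Error(`Update failed with status ${res.status}`);
}

// Example: only the (hypothetical) "status" column changes; other columns keep their values.
// await updateRow("my-table-id", "XHz6kF2XSTGi1ADDbryjqw", { status: "done" });
```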

api-reference/v2/tables/post-table-rows.mdx

Lines changed: 5 additions & 3 deletions

@@ -3,9 +3,11 @@ title: Add Rows to Table
 openapi: post /tables/{tableID}/rows
 ---

-Add row data to an existing Big Table.
+Add one or more rows to an existing Big Table.

-Row data may be passed in JSON, CSV, or TSV format.
+Row IDs for the added rows are returned in the response in the same order as the input row data is passed in the request. Row data may be passed in JSON, CSV, or TSV format.
+
+If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the table schema, or with a value that does not match the column's type, the default behavior is for no rows to be added and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.

 ## Examples

@@ -31,7 +33,7 @@ Row data may be passed in JSON, CSV, or TSV format.
 </Accordion>

 <Accordion title="Add Rows from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

 Then, to add all the row data in a stash to the table in a single atomic operation, use the `$stashID` reference in the `rows` field instead of providing the data inline:
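
The ordering guarantee added in this change makes it easy to pair inputs with the IDs of the rows they produced. A sketch (not part of the commit), assuming a JSON `rows` array in the request and a `rowIDs` array in the response; the response field name, base URL, and auth are assumptions.

```typescript
// Adds rows and zips each input row with its new row ID by index, relying on the
// documented guarantee that IDs come back in the same order as the input rows.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

async function addRows(
  tableID: string,
  rows: Record<string, unknown>[],
): Promise<Array<{ row: Record<string, unknown>; rowID: string }>> {
  const res = await fetch(`${API_BASE}/tables/${tableID}/rows`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ rows }),
  });
  if (!res.ok) throw new Error(`Add rows failed with status ${res.status}`);

  const { rowIDs } = (await res.json()) as { rowIDs: string[] }; // field name assumed
  return rows.map((row, i) => ({ row, rowID: rowIDs[i] }));
}
```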

api-reference/v2/tables/post-tables.mdx

Lines changed: 6 additions & 2 deletions

@@ -5,7 +5,11 @@ openapi: post /tables

 Create a new Big Table, define its structure, and (optionally) populate it with data.

-When using a CSV or TSV request body, the name of the table must be passed as a query parameter and the schema of the table is inferred from the content. Alternatively, the CSV/TSV content may be [stashed](/api-reference/v2/stashing/introduction), and then the schema and name may be passed in the regular JSON payload.
+Row IDs for any added rows are returned in the response in the same order as the input row data is passed in the request. Row data may be passed in JSON, CSV, or TSV format.
+
+When using a CSV or TSV request body, the name of the table must be passed as a query parameter and the schema of the table is always inferred from the content. Alternatively, the CSV/TSV content may be [stashed](/api-reference/v2/stashing/introduction), and then the schema and name may be passed in the regular JSON payload.
+
+If a schema is passed in the payload, any passed row data must match that schema. If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the schema, or with a value that does not match the column's type, the default behavior is for the table to not be created and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.

 ## Examples

@@ -30,7 +34,7 @@ When using a CSV or TSV request body, the name of the table must be passed as a
 However, this is only appropriate for relatively small initial datasets (around a few hundred rows or less, depending on schema complexity). If you need to work with a larger dataset you should utilize stashing.
 </Accordion>
 <Accordion title="Create Table from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

 Then, to create a table from a stash, you can use the `$stashID` reference in the `rows` field instead of providing the data inline:
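
The "Create Table from Stash" change above only swaps in the `put-stashes-serial` link; the flow itself stays the same. A hypothetical sketch of that flow, with the body shape, schema format, base URL, and auth all assumed:

```typescript
// Creates a table whose row data comes from a previously uploaded stash, using a
// `$stashID` reference in the `rows` field instead of inline data.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

async function createTableFromStash(
  name: string,
  schema: unknown, // table schema in whatever shape the API expects (assumed)
  stashID: string,
): Promise<void> {
  const res = await fetch(`${API_BASE}/tables`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name,
      schema,
      rows: { $stashID: stashID }, // reference the stash instead of inline rows
    }),
  });
  if (!res.ok) throw new Error(`Create table failed with status ${res.status}`);
}
```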

api-reference/v2/tables/put-tables.mdx

Lines changed: 6 additions & 2 deletions

@@ -3,7 +3,11 @@ title: Overwrite Table
 openapi: put /tables/{tableID}
 ---

-Overwrite an existing Big Table by clearing all rows and adding new data.
+Overwrite an existing Big Table by clearing all rows and (optionally) adding new data.
+
+Row IDs for any added rows are returned in the response in the same order as the input row data is passed in the request. Row data may be passed in JSON, CSV, or TSV format.
+
+If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the updated table schema, or with a value that does not match the column's type, the default behavior is for no action to be taken and the API call to [return an error](/api-reference/v2/general/errors#invalid-row-data). However, you can control this behavior with the `onSchemaError` query parameter.

 <Warning>
 There is currently no way to supply values for user-specific columns in the API. Those columns will be cleared when using this endpoint.

@@ -44,7 +48,7 @@ When using a CSV or TSV request body, you cannot pass a schema. If you need to u
 </Accordion>

 <Accordion title="Reset table data from Stash">
-[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
+[Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/put-stashes-serial).

 Then, to reset a table's data from the stash, use the `$stashID` reference in the `rows` field instead of providing the data inline:
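
A sketch (not part of the commit) of the overwrite semantics described above, with the body shape, base URL, and auth assumed; note the warning that user-specific columns are cleared either way.

```typescript
// Overwrites a table: all existing rows are cleared, then the supplied rows (if any)
// are added. Per the docs, user-specific columns are cleared regardless.
const API_BASE = "https://api.glideapps.com"; // assumed
const API_TOKEN = "<your-api-token>";

async function overwriteTable(
  tableID: string,
  rows: Record<string, unknown>[], // rows to add after clearing (adding data is optional per the docs)
): Promise<void> {
  const res = await fetch(`${API_BASE}/tables/${tableID}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    // Per the docs, a `$stashID` reference can be used in `rows` instead of inline data.
    body: JSON.stringify({ rows }),
  });
  if (!res.ok) throw new Error(`Overwrite failed with status ${res.status}`);
}
```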

openapi/swagger.json

Lines changed: 16 additions & 16 deletions

@@ -151,12 +151,12 @@
       "type": "array",
       "items": {
         "type": "string",
-        "description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
-        "example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
+        "description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
+        "example": "zcJWnyI8Tbam21V34K8MNA"
       },
-      "description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
+      "description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
       "example": [
-        "2a1bad8b-cf7c-44437-b8c1-e3782df6",
+        "zcJWnyI8Tbam21V34K8MNA",
         "93a19-cf7c-44437-b8c1-e9acbbb"
       ]
     }

@@ -517,12 +517,12 @@
       "type": "array",
       "items": {
         "type": "string",
-        "description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
-        "example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
+        "description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
+        "example": "zcJWnyI8Tbam21V34K8MNA"
       },
-      "description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
+      "description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
       "example": [
-        "2a1bad8b-cf7c-44437-b8c1-e3782df6",
+        "zcJWnyI8Tbam21V34K8MNA",
         "93a19-cf7c-44437-b8c1-e9acbbb"
       ]
     }

@@ -1135,12 +1135,12 @@
       "type": "array",
       "items": {
         "type": "string",
-        "description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
-        "example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
+        "description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
+        "example": "zcJWnyI8Tbam21V34K8MNA"
       },
-      "description": "Row IDs of added rows, e.g., \n\n```json\n[\n\t\"2a1bad8b-cf7c-44437-b8c1-e3782df6\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
+      "description": "Row IDs of added rows, returned in the same order as the input rows, e.g., \n\n```json\n[\n\t\"zcJWnyI8Tbam21V34K8MNA\",\n\t\"93a19-cf7c-44437-b8c1-e9acbbb\"\n]\n```",
       "example": [
-        "2a1bad8b-cf7c-44437-b8c1-e3782df6",
+        "zcJWnyI8Tbam21V34K8MNA",
         "93a19-cf7c-44437-b8c1-e9acbbb"
       ]
     }

@@ -1513,8 +1513,8 @@
     "in": "path",
     "schema": {
       "type": "string",
-      "description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
-      "example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
+      "description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
+      "example": "zcJWnyI8Tbam21V34K8MNA"
     },
     "required": true
   },

@@ -1653,8 +1653,8 @@
     "in": "path",
     "schema": {
       "type": "string",
-      "description": "ID of the row, e.g., `2a1bad8b-cf7c-44437-b8c1-e3782df6`",
-      "example": "2a1bad8b-cf7c-44437-b8c1-e3782df6"
+      "description": "ID of the row, e.g., `zcJWnyI8Tbam21V34K8MNA`",
+      "example": "zcJWnyI8Tbam21V34K8MNA"
     },
     "required": true
   }
