For these reasons, this proposal only supports `@defer` on fragment spreads and inline fragments.
The GraphQL WG is not ruling out supporting `@defer` on fields in the future if additional use-cases are discovered, but it is no longer being considered for this proposal.
## Potential concerns, challenges, and drawbacks
### Client re-renders
With incremental delivery, where multiple responses are delivered in one request, client code could re-render its UI multiple times in a short period of time. This could degrade the application's performance, negating the gains from using `@defer` or `@stream`. There are a few approaches that could mitigate this. Each of these approaches is orthogonal to the others, i.e. the working group could decide that more than one of them should be included in the spec or labeled as best practices.
These solutions require the GraphQL client to efficiently process multiple responses at the same time (Relay added support for this in https://github.com/facebook/relay/commit/b4c92a23ae061943ea7a2ddb5e2f7686d3af8c0e).
1. __Client relies on transport to receive multiple responses.__ If the incremental responses are being sent over an HTTP connection with chunked encoding, the client may receive multiple responses in a single read of the HTTP stream and process them at the same time. This is only likely to happen when the responses are small and sent very close together. This would not work for all transport types, e.g. WebSockets, where each frame is received separately.
For example, the client might receive several responses at once:
```
---
Content-Type: application/json
Content-Length: 125

{
  "path": ["viewer","itemSearch","edges",5],
  "data": {
    "node": {
      "item": {
        "attribute": "Vintage 1950s Swedish Scandinavian Modern"
      }
    }
  }
}

---
Content-Type: application/json
Content-Length: 126

{
  "path": ["viewer","itemSearch","edges",6],
  "data": {
    "node": {
      "item": {
        "attribute": "Mid-20th Century Italian Hollywood Regency"
      }
    }
  }
}

---
Content-Type: application/json
Content-Length: 124

{
  "path": ["viewer","itemSearch","edges",7],
  "data": {
    "node": {
      "item": {
        "attribute": "Vintage 1950s Italian Mid-Century Modern"
      }
    }
  }
}
```
It could then process each of these before triggering a re-render.
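
A minimal TypeScript sketch of this idea follows; the boundary parsing and the `applyToStore`/`rerender` helpers are hypothetical illustrations, not part of any particular client library:

```ts
// Hypothetical sketch: a single read from a chunked HTTP stream may contain
// several multipart parts. Split on the boundary, parse each JSON body, apply
// them all to the client store, and only then trigger one re-render.

interface IncrementalPayload {
  path: Array<string | number>;
  data: unknown;
}

declare function applyToStore(payload: IncrementalPayload): void; // merge into the cache
declare function rerender(): void; // notify the UI

function handleChunk(chunk: string): void {
  const payloads = chunk
    .split("---") // multipart boundary, as in the example above
    .filter((part) => part.includes("{")) // drop empty parts and bare headers
    .map((part) => JSON.parse(part.slice(part.indexOf("{"))) as IncrementalPayload);

  for (const payload of payloads) {
    applyToStore(payload); // no render yet
  }
  rerender(); // a single re-render covers the whole batch
}
```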
2. __Client-side debounce.__ GraphQL clients can debounce the processing of responses before triggering a re-render. For a query that contains `@defer` or `@stream`, the client waits a predetermined amount of time after a response is received. If any additional responses arrive within that window, it processes the results in one batch. This has the downside of adding latency: if no additional responses arrive within the timeout period, processing of the initial response is delayed by the length of the debounce timeout. There is also significant complexity in determining an optimal debounce interval. Even if this "magic number" is determined by analyzing historical performance data, it is not constant and must be re-evaluated as queries and server implementations change over time.
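
As a rough illustration, a client-side debounce might look like the following sketch. It reuses the hypothetical `IncrementalPayload`, `applyToStore`, and `rerender` from the previous sketch, and `DEBOUNCE_MS` is the "magic number" discussed above:

```ts
// Hypothetical sketch: buffer payloads and flush them in one batch once no
// new payload has arrived for DEBOUNCE_MS milliseconds. The trade-off is
// visible here: every flush is delayed by at least DEBOUNCE_MS.

const DEBOUNCE_MS = 20; // needs tuning, and re-tuning, per query and server

let buffer: IncrementalPayload[] = [];
let timer: ReturnType<typeof setTimeout> | undefined;

function onPayload(payload: IncrementalPayload): void {
  buffer.push(payload);
  if (timer !== undefined) clearTimeout(timer);
  timer = setTimeout(flush, DEBOUNCE_MS); // restart the debounce window
}

function flush(): void {
  const batch = buffer;
  buffer = [];
  timer = undefined;
  for (const payload of batch) applyToStore(payload);
  rerender(); // one re-render per batch instead of one per payload
}
```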
3. __Server sends batched responses.__ This approach changes the spec to allow a GraphQL server to return either the current GraphQL response map or a list of GraphQL response maps. This gives the server the flexibility to determine when it is beneficial to group incremental responses together. If several responses are ready at the same time, the server can deliver them together. The server may also know how long its resolvers will take to resolve and could choose to debounce on that basis. It is worth noting that a naive debouncing algorithm on the server could likewise degrade performance by introducing latency.
An example batched response:
```json
[
  {
    "path": ["viewer","itemSearch","edges",5],
    "data": {
      "node": {
        "item": {
          "attribute": "Vintage 1950s Swedish Scandinavian Modern"
        }
      }
    }
  },
  {
    "path": ["viewer","itemSearch","edges",6],
    "data": {
      "node": {
        "item": {
          "attribute": "Mid-20th Century Italian Hollywood Regency"
        }
      }
    }
  },
  {
    "path": ["viewer","itemSearch","edges",7],
    "data": {
      "node": {
        "item": {
          "attribute": "Vintage 1950s Italian Mid-Century Modern"
        }
      }
    }
  }
]
```
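
A server-side sketch of this batching, under the same hypothetical-payload assumption as the client sketches above; the `enqueue`/`write` names and the microtask-sized batching window are illustrative choices, not a prescribed design:

```ts
// Hypothetical sketch: payloads that become ready in the same event-loop
// tick are written as one JSON array (the "list of response maps" shape);
// a lone payload is still written as a single response map.

let pending: IncrementalPayload[] = [];
let flushScheduled = false;

function enqueue(
  payload: IncrementalPayload,
  write: (body: string) => void
): void {
  pending.push(payload);
  if (flushScheduled) return;
  flushScheduled = true;
  // queueMicrotask groups everything that resolved "at the same time";
  // a timer-based window would trade added latency for larger batches.
  queueMicrotask(() => {
    const batch = pending;
    pending = [];
    flushScheduled = false;
    write(JSON.stringify(batch.length === 1 ? batch[0] : batch));
  });
}
```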
4. __Server can ignore `@defer`/`@stream`.__ This approach allows the GraphQL server to treat `@defer` and `@stream` as hints. The server can ignore these directives and include the deferred data in an earlier response. This requires clients to be written with the expectation that deferred data could arrive either in its own incrementally delivered response or as part of a previously delivered response. This solution does not require the client to be able to process multiple responses at the same time.
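
One way to satisfy that expectation, sketched below with a hypothetical `mergeAtPath` helper: a client that always merges a payload into its result tree at `payload.path` behaves the same whether the server honored or ignored the directive, because inline-delivered data simply means no incremental payload ever arrives for that path.

```ts
// Hypothetical sketch: merge incoming data into the result tree at `path`.
// If the server ignored @defer, the data was already present in the initial
// response and no incremental payload is delivered for it at all.

type PathKey = string | number;

function mergeAtPath(
  result: Record<PathKey, any>,
  path: PathKey[],
  data: object
): void {
  let node: any = result;
  for (const key of path) {
    if (node[key] == null) node[key] = {}; // create the branch on first arrival
    node = node[key];
  }
  Object.assign(node, data); // write the deferred fields in place
}
```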
# Additional material
- [1] [Lee Byron on idea of @defer and @stream](https://www.youtube.com/watch?v=ViXL0YQnioU&feature=youtu.be&t=9m4s)
- [2] [[Proposal] Introducing @defer in Apollo Server](https://blog.apollographql.com/introducing-defer-in-apollo-server-f6797c4e9d6e)