1. either globally, with `app.use` for all incoming requests,
2. or at method level with `addService`, where instead of a single handler, an array of handlers can be provided (handler and middleware have the same API).
```typescript
app.use(call => {
  /*...*/
})
app.addService(CatService, {
  getCat: [
    call => {
      /*...*/
    },
    call => {
      /*...*/
    },
  ],
})
```

Note that grpc does not provide an API to intercept all incoming requests, only to …
#### `next` function
Apart from `call`, each middleware (and each handler alike) receives a `next` function, which represents the call stack of all subsequent middlewares and handlers. This is demonstrated in the simple logger middleware below.

All middlewares are executed in the order they were registered, followed by the handlers in their provided order, regardless of middleware-service registration order. Note that in the following example, the `C` middleware is registered after `CatService`, and it is still called, even before the handlers.
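As a framework-free sketch of this ordering rule, the toy `compose` runner below (illustrative only, not ProtoCat's internals) chains functions so that each one resumes the rest via `next`:

```typescript
type Next = () => void
type Middleware = (call: unknown, next: Next) => void

// Toy chain runner: middlewares run in registration order,
// each one resuming the rest of the chain via `next`.
const compose = (mws: Middleware[]) => (call: unknown): void => {
  const run = (i: number): void => {
    if (i < mws.length) mws[i](call, () => run(i + 1))
  }
  run(0)
}

const order: string[] = []
const tag = (name: string): Middleware => (_call, next) => {
  order.push(`${name}:in`)
  next()
  order.push(`${name}:out`)
}

// A and C play the role of middlewares, the last function is the handler:
// both middlewares run before it and unwind after it.
const chain = compose([tag('A'), tag('C'), _call => { order.push('handler') }])
chain({})
console.log(order.join(' → ')) // → A:in → C:in → handler → C:out → A:out
```

The same onion-style order holds in ProtoCat's async chain; the sketch only drops the promises for brevity.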
Error handling can be solved with a simple custom middleware, thanks to the existing `next` cascading mechanism:
```typescript
app.use(async (call, next) => {
  try {
    await next()
  } catch (error) {
    /*...*/
  }
})
```
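The cascading mechanism itself can be shown without any gRPC machinery. In the sketch below (again a toy `compose` stand-in, not ProtoCat's implementation), a rejection thrown by a later handler surfaces as a catchable error in the outermost middleware:

```typescript
type Next = () => Promise<void>
type Middleware = (call: unknown, next: Next) => Promise<void>

// Toy async chain runner: each middleware resumes the rest via `next`.
const compose = (mws: Middleware[]) => (call: unknown): Promise<void> => {
  const run = (i: number): Promise<void> =>
    i < mws.length ? mws[i](call, () => run(i + 1)) : Promise.resolve()
  return run(0)
}

const log: string[] = []
const chain = compose([
  // Outermost middleware: any rejection further down surfaces here.
  async (_call, next) => {
    try {
      await next()
    } catch (e) {
      log.push(`caught: ${(e as Error).message}`)
    }
  },
  // Simulated failing handler.
  async () => {
    throw new Error('boom')
  },
])
```

Calling `chain({})` resolves normally and leaves `log` as `['caught: boom']`: the outer middleware consumed the rejection, just as the `app.use` middleware above consumes errors from later handlers.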
There is an `onError` middleware creator that can intercept all errors, including errors emitted from streams:
```typescript
import { onError } from 'protocat'

app.use(
  onError((e, call) => {
    // Set metadata
    call.initialMetadata.set('error-code', e.code)
    call.trailingMetadata.set('error-code', e.code)

    // Consume the error
    if (notThatBad(e)) {
      if (call.type === CallType.ServerStream || call.type === CallType.Bidi) {
        // sync error not re-thrown on stream response, should end
        call.end()
      }
      return
    }
    // ...
  })
)
```
- The handler is called with the error and the current call for all errors (rejects from handlers, error emits from streams), meaning there can theoretically be more than one error per request (multiple emitted errors), and some of them can be handled even after the execution of the next chain (error emits).
- The provided function can be sync or async. It can throw (or return a rejected promise), but any other return value is ignored.
- Both initial and trailing metadata are available for modification (unless you have already sent them manually).
- In order to achieve "re-throwing", the `emit` function on the call is patched by `onError`. When calling `call.emit('error', e)`, the error is actually emitted on the stream only when the handler throws a new error. This means that when you emit an error in the middleware and consume it in the handler, streams are left "hanging": not errored and likely not even ended. If you truly wish not to propagate the error to the client, it is recommended to end the streams in the handler. (This is not performed automatically, since there is no guarantee that there will be no more than one error.)
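As an illustration of this patching idea (a hypothetical sketch, not ProtoCat's actual code), wrapping an `EventEmitter`'s `emit` can hold back an emitted `'error'` and forward it only if the handler re-throws:

```typescript
import { EventEmitter } from 'events'

// Illustrative sketch of the described behavior: an emitted 'error'
// is intercepted, and reaches the stream only if `handler` re-throws it.
const patchEmit = (
  stream: EventEmitter,
  handler: (e: Error) => void
): void => {
  const original = stream.emit.bind(stream)
  stream.emit = (event: string | symbol, ...args: any[]): boolean => {
    if (event === 'error') {
      try {
        handler(args[0] as Error) // returning normally consumes the error
        return false
      } catch (rethrown) {
        return original('error', rethrown) // re-thrown: reaches the stream
      }
    }
    return original(event, ...args)
  }
}

const seen: string[] = []
const stream = new EventEmitter()
stream.on('error', (e: Error) => seen.push(e.message))
patchEmit(stream, e => {
  if (e.message === 'fatal') throw e // only fatal errors are re-thrown
})

stream.emit('error', new Error('minor')) // consumed, never reaches the stream
stream.emit('error', new Error('fatal')) // re-thrown, reaches the stream
console.log(seen) // → [ 'fatal' ]
```

The consumed `'minor'` error leaves the stream untouched, which is exactly the "hanging" state described above: if you consume an error this way, remember to end the stream yourself.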