**CHANGELOG.md** (9 additions, 3 deletions)

This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

### Added
- Support publishing events consumed from [NATS](https://nats.io) topics. See the [documentation](https://accenture.github.io/reactive-interaction-gateway/docs/event-streams.html#nats) for how to get started. [#297](https://github.com/Accenture/reactive-interaction-gateway/issues/297)
- Added validation for the reverse proxy configuration: RIG now crashes on startup when the configuration is invalid, and the REST API returns `400` when a configuration update is invalid. [#277](https://github.com/Accenture/reactive-interaction-gateway/issues/277)
- Added basic distributed tracing support following the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/), with Jaeger and OpenZipkin exporters. RIG opens a span at the API gateway and emits the trace context in CloudEvents following the [distributed tracing spec](https://github.com/cloudevents/spec/blob/v1.0/extensions/distributed-tracing.md). [#281](https://github.com/Accenture/reactive-interaction-gateway/issues/281)
- Added the ability to set the response code for `response_from` messages in the reverse proxy (`kafka` and `http_async`). [#321](https://github.com/Accenture/reactive-interaction-gateway/pull/321)
- Added a new version, `v3`, of the internal endpoints to support setting the response code via the `/responses` endpoint
- Added Helm v3 template to the `deployment` folder [#288](https://github.com/Accenture/reactive-interaction-gateway/issues/288)
### Changed
- Incorporated [cloudevents-ex](https://github.com/kevinbader/cloudevents-ex) to properly handle the binary and structured modes of the [Kafka protocol binding](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md). This introduces some **breaking changes**:
  - Binary mode now uses the `ce_` prefix for CloudEvents context-attribute headers (it was `ce-` before), as specified by the [Kafka protocol binding](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md)
  - The change above also affects the `"response_from": "kafka"` proxy functionality: RIG forwards only the Kafka message body to clients, without headers. In binary mode this means clients receive only the data part, without the CloudEvents context attributes.
- Changed the `response_from` handler to expect a message in binary format, **NOT** a cloud event (`kafka` and `http_async`). [#321](https://github.com/Accenture/reactive-interaction-gateway/pull/321)
- Updated Helm v2 template, kubectl yaml file and instructions in the `deployment` folder [#288](https://github.com/Accenture/reactive-interaction-gateway/issues/288)

**docs/api-gateway.md** (28 additions, 17 deletions)

This defines a single service called "my-service".
As a demo service, we use a small Node.js script:
```js
const http = require("http");
const port = 3000;
const handler = (_req, res) => res.end("Hi, I'm a demo service!\n");
const server = http.createServer(handler);
server.listen(port, (err) => {
  if (err) {
    return console.error(err);
  }
});
```
### Wait for response
Sometimes it makes sense to provide a simple request-response API to something that runs asynchronously on the backend. For example, let's say there's a ticket reservation process that takes 10 seconds in total and involves three different services that communicate via message passing. For an external client, it may be simpler to wait 10 seconds for the response instead of polling for a response every other second.
A behavior like this can be configured using an endpoint's `response_from` property. When set to `kafka`, the response to the request is not taken from the `target` (e.g., for `target` = `http` this means the backend's HTTP response is ignored); instead, it is read from a Kafka topic. To enable RIG to correlate the response from the topic with the original request, RIG adds a correlation ID to the request (using a query parameter in case of `target` = `http`, or baked into the produced CloudEvent otherwise). **Backend services that work with the request need to include that correlation ID in their response; otherwise, RIG won't be able to forward it to the client and the request times out.**
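
As a rough sketch of the backend's side of this contract (the query-parameter name `correlation`, the helper name, and the event shape are assumptions for illustration, not RIG's documented API):

```javascript
// Hypothetical sketch: a backend worker copies the correlation ID that RIG
// appended to the request URL into the response event it publishes, so that
// RIG can match the response to the waiting client request.
function buildResponseEvent(requestUrl, payload) {
  const correlation = new URL(requestUrl).searchParams.get("correlation");
  if (correlation === null) {
    // Without the correlation ID, RIG cannot forward the response and times out.
    throw new Error("missing correlation ID");
  }
  return { correlation, data: payload };
}

const event = buildResponseEvent(
  "http://backend:3000/reserve-ticket?correlation=abc123",
  { status: "reserved" }
);
console.log(event.correlation); // abc123
```

The important point is only that the ID travels unchanged from the request into the response event.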
Configuration of such an API endpoint might look like this:
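
The example configuration is collapsed in this diff; the following is a sketch based on RIG's proxy configuration format (the service and endpoint IDs and the path are illustrative):

```json
{
  "id": "my-service",
  "version_data": {
    "default": {
      "endpoints": [
        {
          "id": "reserve-ticket",
          "method": "POST",
          "path": "/reserve",
          "target": "kafka",
          "response_from": "kafka"
        }
      ]
    }
  }
}
```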
> Note the presence of the `response_from` field. It tells RIG to wait for a different event with the same correlation ID.
As an alternative, you can set `response_from` to `http_async`. In that case, the correlated response has to be sent via `POST` to the internal endpoint `:4010/v2/responses` with a body like this:
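
The example body is truncated in this diff. As a purely hypothetical illustration (the attribute names below are assumptions, not RIG's documented format), the posted body would be a CloudEvent carrying the original request's correlation ID:

```json
{
  "specversion": "1.0",
  "type": "com.example.reservation-response",
  "source": "/ticket-service",
  "id": "d9a2c1",
  "correlation": "the-correlation-id-from-the-request",
  "data": { "status": "reserved" }
}
```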

**docs/avro.md** (12 additions, 12 deletions)

Adopting Avro for event (de)serialization is fairly straightforward.
## RIG as a Kafka producer
- the producer evaluates whether serialization is turned on by checking if the `KAFKA_SERIALIZER` environment variable's value is `avro`
- if it is, it creates headers for the Kafka event by prepending the `ce_` prefix to every field except the `data` field - **binary mode**
  - **nested context attributes are stringified**, since Kafka headers don't support nested values (this is common when using CloudEvents extensions)
- after that, the `data` field is serialized using the schema name (the function for fetching schemas from the registry is cached in-memory)
- the producer sends the event with the created headers and data (in binary format `<<0, 0, 0, 0, 1, 5, 3, 8, ...>>`) to Kafka
> If `KAFKA_SERIALIZER` is not set to `avro`, the producer sets **only** `ce_contenttype` or `ce_contentType` on the Kafka event
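
The header mapping described above can be sketched as follows (a minimal illustration, not RIG's or cloudevents-ex's actual code; `toBinaryModeHeaders` is a hypothetical helper):

```javascript
// Sketch: build binary-mode Kafka headers from CloudEvents context attributes.
// Every attribute except "data" gets the "ce_" prefix; nested values (common
// with CloudEvents extensions) are stringified, since Kafka headers are flat.
function toBinaryModeHeaders(event) {
  const headers = {};
  for (const [key, value] of Object.entries(event)) {
    if (key === "data") continue; // "data" becomes the Kafka message value
    headers[`ce_${key}`] =
      typeof value === "object" ? JSON.stringify(value) : String(value);
  }
  return headers;
}

const headers = toBinaryModeHeaders({
  specversion: "1.0",
  type: "com.example.event",
  source: "/demo",
  id: "1",
  data: { answer: 42 },
});
console.log(headers);
// { ce_specversion: '1.0', ce_type: 'com.example.event', ce_source: '/demo', ce_id: '1' }
```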
## RIG as a Kafka consumer
Event parsing is based on the [Kafka Transport Binding for CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md) implemented via [cloudevents-ex](https://github.com/kevinbader/cloudevents-ex). Check the [Event format](./event-format.md#kafka-transport-binding) section.
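
For illustration only (cloudevents-ex handles this for real; `fromBinaryModeHeaders` is a hypothetical helper), the consuming direction strips the `ce_` prefix to recover the context attributes:

```javascript
// Sketch: recover CloudEvents context attributes from binary-mode Kafka
// headers by stripping the "ce_" prefix; the Kafka message value becomes "data".
function fromBinaryModeHeaders(kafkaHeaders, value) {
  const event = { data: value };
  for (const [key, val] of Object.entries(kafkaHeaders)) {
    if (key.startsWith("ce_")) {
      event[key.slice("ce_".length)] = val;
    }
  }
  return event;
}

const event = fromBinaryModeHeaders(
  { ce_specversion: "1.0", ce_type: "com.example.event", ce_id: "1" },
  "serialized payload"
);
console.log(event.specversion); // 1.0
```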
## Example 1: producing to and consuming from the same topic
In this example we'll have RIG send a message to itself to check that RIG's producing and consuming parts work correctly. The idea is that RIG produces a serialized event in response to an HTTP request and, a few moments later, consumes that same event (and deserializes it correctly).
```bash
## 1. Start Kafka with Zookeeper and Kafka Schema Registry
KAFKA_PORT_PLAIN=17092 KAFKA_PORT_SSL=17093 HOST=localhost docker-compose -f integration_tests/kafka_tests/docker-compose.yml up -d

## 2. Start RIG
# Here we tell RIG to use Avro, consume from the "rigRequest" topic, and use the "rigRequest-value" schema from the Kafka Schema Registry
# The proxy is turned on so that we can produce a Kafka event with headers (needed for cloud events)
```