**CHANGELOG.md** (+5 −1)
```diff
@@ -13,7 +13,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0)
 - Added validation for the reverse proxy configuration. RIG now crashes on start when the configuration is invalid, and the configuration-update REST API returns `400` when given an invalid configuration. [#277](https://github.com/Accenture/reactive-interaction-gateway/issues/277)
 - Added basic distributed tracing support following the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/), with Jaeger and OpenZipkin exporters. RIG opens a span at the API Gateway and emits trace context in CloudEvents following the [distributed tracing spec](https://github.com/cloudevents/spec/blob/v1.0/extensions/distributed-tracing.md). [#281](https://github.com/Accenture/reactive-interaction-gateway/issues/281)
 
-<!-- ### Changed -->
+### Changed
+
+- Incorporated [cloudevents-ex](https://github.com/kevinbader/cloudevents-ex) to properly handle binary and structured modes for the [Kafka protocol binding](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md). This introduces some **breaking changes**:
+  - Binary mode now uses the `ce_` prefix for CloudEvents context attribute headers (previously `ce-`), as required by the [Kafka protocol binding](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md).
+  - This change also affects the `"response_from": "kafka"` proxy functionality: RIG forwards only the Kafka message body to clients, not the headers. Consequently, when using binary mode, clients receive only the data part, without the CloudEvents context attributes.
```
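The header-prefix change above can be illustrated with a small sketch: a hypothetical helper (not part of RIG or cloudevents-ex) that migrates headers using the old `ce-` convention to the `ce_` convention required by the Kafka protocol binding.

```javascript
// Hypothetical migration helper, for illustration only: rename pre-1.0-style
// "ce-"-prefixed Kafka headers to the "ce_" prefix mandated by the Kafka
// protocol binding. Non-CloudEvents headers (e.g. "content-type") pass through.
function migrateKafkaHeaders(headers) {
  const migrated = {};
  for (const [key, value] of Object.entries(headers)) {
    migrated[key.startsWith("ce-") ? "ce_" + key.slice(3) : key] = value;
  }
  return migrated;
}
```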
**docs/api-gateway.md** (+4 −2)
````diff
@@ -46,11 +46,11 @@ This defines a single service called "my-service". The URL is read from a given
 As a demo service, we use a small Node.js script:
 
 ```js
-const http = require('http');
+const http = require("http");
 const port = 3000;
 const handler = (_req, res) => res.end("Hi, I'm a demo service!\n");
 const server = http.createServer(handler);
-server.listen(port, err => {
+server.listen(port, (err) => {
   if (err) {
     return console.error(err);
   }
````
```diff
@@ -182,6 +182,8 @@ The endpoint expects the following request format:
 Sometimes it makes sense to provide a simple request-response API in front of something that runs asynchronously on the backend. For example, let's say there's a ticket reservation process that takes 10 seconds in total and involves three different services that communicate via message passing. For an external client, it may be simpler to wait 10 seconds for the response instead of polling for a response every other second.
 
 A behavior like this can be configured using an endpoint's `response_from` property. When set to `kafka`, the response to the request is not taken from the `target` (e.g., for `target` = `http` this means the backend's HTTP response is ignored); instead, it's read from a Kafka topic. To enable RIG to correlate the response from the topic with the original request, RIG adds a correlation ID to the request (using a query parameter in case of `target` = `http`, or baked into the produced CloudEvent otherwise). Backend services that work with the request need to include that correlation ID in their response; otherwise, RIG won't be able to forward it to the client (and the request times out).
 
+> In case you want to use the _binary_ transport mode, make sure that the `rig` extension (containing the correlation ID) is prefixed with `ce_` as well.
+
 Configuration of such an API endpoint might look like this:
```
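The correlation requirement in the note above can be sketched from the backend's side. This is a hedged illustration, not RIG's actual API: the `ce_rig` header name follows from the `rig` extension plus the binary-mode `ce_` prefix, while the event type, source, and payload shape are made-up placeholders.

```javascript
// Sketch of a backend building a binary-mode Kafka response message for a
// request proxied with "response_from": "kafka". The correlation ID arrives
// in the "ce_rig" header and must be echoed back unchanged, or RIG cannot
// match the response to the waiting HTTP request (and the request times out).
function buildResponseMessage(requestHeaders, responseData) {
  const correlation = requestHeaders["ce_rig"];
  if (correlation === undefined) {
    throw new Error("missing ce_rig header: RIG cannot correlate the response");
  }
  return {
    headers: {
      ce_specversion: "1.0",
      ce_type: "com.example.ticket.reserved", // hypothetical event type
      ce_source: "/ticket-service",           // hypothetical source
      ce_id: "resp-" + Date.now(),
      ce_rig: correlation,                    // echoed so RIG can match the request
      "content-type": "application/json",
    },
    body: JSON.stringify(responseData),
  };
}
```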
**docs/avro.md** (+12 −12)
````diff
@@ -24,37 +24,37 @@ Adopting Avro for event (de)serialization is fairly straightforward.
 ## RIG as a Kafka producer
 
 - The producer evaluates whether serialization is turned on by checking the `KAFKA_SERIALIZER` environment variable and whether its value is `avro`.
-- If it is, it creates headers for the Kafka event by prepending the `ce-` prefix to every field besides the `data` field.
+- If it is, it creates headers for the Kafka event by prepending the `ce_` prefix to every field except the `data` field - **binary mode**.
 - **Nested context attributes are stringified**, since Kafka headers don't support nested values (this is common when using CloudEvents extensions).
 - After that, the `data` field is serialized using the schema name (the function for getting schemas from the registry is cached in-memory).
 - The producer sends the event with the created headers and data (in binary format, `<<0, 0, 0, 0, 1, 5, 3, 8, ...>>`) to Kafka.
 
-> If `KAFKA_SERIALIZER` is not set to `avro`, the producer sets **only** `ce-contenttype` or `ce-contentType` for the Kafka event.
+> If `KAFKA_SERIALIZER` is not set to `avro`, the producer sets **only** `ce_contenttype` or `ce_contentType` for the Kafka event.
 
 ## RIG as a Kafka consumer
 
-- When consuming a Kafka event, RIG checks the event's headers and removes the `ce-` prefix.
-- Based on the headers, it decides the CloudEvents version and content type.
-- In case the content type is `avro/binary`, the schema ID is taken from the event value and the value is deserialized.
-- If the content type is **not** present, RIG checks for the Avro format (`<<0, 0, 0, 0, 1, 5, 3, 8, ...>>`) and attempts deserialization; otherwise the event is sent to the client as-is, without any deserialization.
+Event parsing is based on the [Kafka Transport Binding for CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md), implemented via [cloudevents-ex](https://github.com/kevinbader/cloudevents-ex). Check the [Event format](./event-format.md#kafka-transport-binding) section.
 
 ## Example 1: producing to and consuming from the same topic
 
 In this example we'll have RIG send a message to itself to see whether RIG's producing and consuming parts work correctly. The idea is that RIG produces a serialized event as a result of an HTTP request and, a few moments later, consumes that same event (and deserializes it correctly).
 
 ```bash
+
 ## 1. Start Kafka with Zookeeper and Kafka Schema Registry
+
 KAFKA_PORT_PLAIN=17092 KAFKA_PORT_SSL=17093 HOST=localhost docker-compose -f integration_tests/kafka_tests/docker-compose.yml up -d
 
 ## 2. Start RIG
+
 # Here we tell RIG to use Avro, consume on topic "rigRequest", and use the "rigRequest-value" schema from the Kafka Schema Registry
 # The proxy is turned on to be able to produce Kafka events with headers (needed for CloudEvents)
````
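The producer steps above (binary mode, `ce_` prefix, stringified nested attributes) can be sketched as a plain function. This is an illustration in JavaScript rather than RIG's internal Elixir code, and the function name is made up; the Avro serialization of `data` is left out.

```javascript
// Illustrative sketch of the producer's binary-mode header construction:
// every context attribute except "data" becomes a "ce_"-prefixed Kafka
// header, and nested values (common with CloudEvents extensions) are
// stringified because Kafka headers are flat strings.
function toBinaryKafkaMessage(event) {
  const headers = {};
  for (const [key, value] of Object.entries(event)) {
    if (key === "data") continue;
    headers["ce_" + key] =
      typeof value === "object" && value !== null ? JSON.stringify(value) : String(value);
  }
  // The real producer would now Avro-serialize event.data using the schema
  // registry; here the data is passed through untouched.
  return { headers, body: event.data };
}
```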
```diff
@@ -41,8 +41,6 @@ For further details and examples, check out the dedicated [Section on Avro](avro)
 ## Transport Bindings
 
-While the HTTP transport binding seems solid already, the Kafka transport mode is not yet available. To support Kafka anyway, we have adapted the HTTP transport modes for Kafka. Consequently, the Kafka transport mode implementation may change in the future, depending on the outcome of the [standardization process](https://github.com/cloudevents/spec/pull/337).
-
 When sending or receiving an event, you have a couple of options:
 
 - Send the event as-is, or send only the "data" (= payload) in the "body" and the rest (the so-called "context attributes") in headers. The former is called _structured_ transport mode, while the latter is known as _binary_ transport mode.
```
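The structured/binary distinction above can be sketched in a few lines, assuming a CloudEvent given as a plain object. The function names are illustrative, not part of RIG's API, and header prefixes (which differ between the HTTP and Kafka bindings) are omitted here.

```javascript
// Structured mode: the whole event, context attributes and data together,
// is serialized into the message body.
function toStructured(event) {
  return { headers: {}, body: JSON.stringify(event) };
}

// Binary mode: only "data" stays in the body; all other fields (the context
// attributes) move into headers.
function toBinary(event) {
  const { data, ...contextAttributes } = event;
  return { headers: contextAttributes, body: JSON.stringify(data) };
}
```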
```diff
@@ -90,7 +88,7 @@ Request body:
 ### Binary
 
-In binary mode the request body only contains the `data` value of the corresponding CloudEvent. The _context attributes_ - i.e., all other fields - are moved into the HTTP headers. The data/body encoding is determined by the `content-type` header. At the time of writing, two content types are supported: `application/json` and `avro/binary`.
+In binary mode the request body only contains the `data` value of the corresponding CloudEvent. The _context attributes_ - i.e., all other fields, **including [extensions](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes)** - are moved into the HTTP headers. The data/body encoding is determined by the `content-type` header. At the time of writing, two content types are supported: `application/json` and `avro/binary`.
 
 <details>
 <summary>Same example event, sent using HTTP request in binary mode</summary>
```
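For contrast with the Kafka binding's `ce_` prefix, here is a sketch of binary mode over HTTP, where context attribute headers use the `ce-` prefix (with a dash). The function name and event fields are illustrative assumptions, not RIG's API.

```javascript
// Sketch: build the HTTP request parts for a CloudEvent in binary mode.
// Context attributes (including extensions) become "ce-"-prefixed headers;
// only "data" goes into the body, whose encoding the content-type announces.
function toBinaryHttpRequest(event) {
  const { data, ...attributes } = event;
  const headers = { "content-type": "application/json" };
  for (const [key, value] of Object.entries(attributes)) {
    headers["ce-" + key] = String(value);
  }
  return { headers, body: JSON.stringify(data) };
}
```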
```diff
@@ -119,7 +117,7 @@ Request body:
 ## Kafka Transport Binding
 
-As mentioned above, the corresponding CloudEvents transport binding specification is still under [active development](https://github.com/cloudevents/spec/pull/337/files). We follow an approach very similar to the HTTP transport binding outlined above. We utilize Kafka headers, which were introduced in Kafka version `0.11`. In order to support older Kafka versions as well, RIG defaults to structured mode and does not require any headers at all (see below).
+Implemented using the [Kafka Transport Binding for CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/kafka-protocol-binding.md). We utilize Kafka headers, which were introduced in Kafka version `0.11`. In order to support older Kafka versions as well, RIG defaults to structured mode and does not require any headers at all (see below).
 
 Like with the HTTP transport binding, we define two modes of operation: structured and binary.
```
```diff
@@ -158,7 +156,7 @@ Message body:
 ### Binary
 
-In binary mode the message body only contains the `data` value of the corresponding CloudEvent. The _context attributes_ - i.e., all other fields - are moved into the message headers. The data/body encoding is determined by the `content-type` header. In this mode there is no default for `content-type`, and RIG rejects messages that come without it. At the time of writing, two content types are supported: `application/json` and `avro/binary`.
+In binary mode the message body only contains the `data` value of the corresponding CloudEvent. The _context attributes_ - i.e., all other fields, **including [extensions](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes)** - are moved into the message headers. The data/body encoding is determined by the `content-type` header. In this mode there is no default for `content-type`, and RIG rejects messages that come without it. At the time of writing, two content types are supported: `application/json` and `avro/binary`.
 
 <details>
 <summary>Same example in binary mode</summary>
@@ -167,10 +165,10 @@ In binary mode the message body only contains the `data` value of the correspond
 In binary mode the message header contains all context attributes. It also announces the body encoding:
```
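The decoding direction for a binary-mode Kafka message can be sketched similarly. This is an assumption-laden illustration, not cloudevents-ex's API: strip the `ce_` prefix from headers to recover context attributes, and pick a body decoder from `content-type` (only the JSON branch is implemented here; `avro/binary` would need a schema-registry lookup).

```javascript
// Sketch: reassemble a CloudEvent from a binary-mode Kafka message.
// Per the binding as described above, a missing content-type is rejected.
function fromBinaryKafkaMessage({ headers, body }) {
  const contentType = headers["content-type"];
  if (contentType === undefined) {
    throw new Error("binary mode requires a content-type header");
  }
  const event = {};
  for (const [key, value] of Object.entries(headers)) {
    if (key.startsWith("ce_")) event[key.slice(3)] = value;
  }
  // Decode the body; a real implementation would handle avro/binary here.
  event.data = contentType === "application/json" ? JSON.parse(body) : body;
  return event;
}
```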