Commit c1d1628

Update the concepts section
1 parent 51b382a commit c1d1628

1 file changed (+98 −1 lines)

content/event-platform/concepts.md

Lines changed: 98 additions & 1 deletion
@@ -35,7 +35,7 @@ The following properties represent a subscription:
  | `id` | Automatically populated | Identifier of the subscription within the Event Platform |
  | `source` | Populated by the user at creation | Source of the event (currently, the only source available is `pim`) |
  | `subject` | From `X-PIM-URL` header parameter | URL of the targeted source |
- | `type` | Populated by the user at creation | Type of the subscription (currently, there are two available types: `https` and `pubsub`) |
+ | `type` | Populated by the user at creation | Type of the subscription (currently, there are three available types: `https`, `pubsub`, and `kafka`) |
  | `events` | Populated by the user at creation | A list of events that the subscription is tracking |
  | `status` | Automatically populated | The subscription status |
  | `config` | Populated by the user at creation | The subscription configuration is based on the subscription type. See below for further details. |
@@ -205,6 +205,103 @@ We currently use a static IP address provided by Google Cloud: `34.140.80.128`

**However, we cannot guarantee that this IP address will remain unchanged indefinitely.** Therefore, we strongly recommend whitelisting the `europe-west1` IP ranges from [Google Cloud's IP ranges list](https://www.gstatic.com/ipranges/cloud.json) to ensure continuous access.

### Kafka subscription

This option delivers events to an Apache Kafka topic. It provides high-throughput, fault-tolerant event streaming capabilities for enterprise integrations.

#### Key Advantages

- **High Throughput:** Kafka is designed to handle high-volume event streams with low latency.
- **Durability & Reliability:** Events are persisted to disk and replicated across multiple brokers for fault tolerance.
- **Scalability:** Kafka clusters can be scaled horizontally to handle increasing event volumes.
- **Ordering Guarantees:** Events are delivered in order within each partition.

#### Configuration

For the `kafka` subscription type, the `config` property requires the Kafka cluster connection details and topic information.

```json[snippet:Kafka subscription]
{
    "source": "pim",
    "subject": "https://my-pim.cloud.akeneo.com",
    "events": [
        "com.akeneo.pim.v1.product.updated"
    ],
    "type": "kafka",
    "config": {
        "broker": "kafka-cluster.example.com:9092",
        "topic": "pim-events",
        "sasl_auth": {
            "mechanism": "plain",
            "username": "your_kafka_username",
            "password": "your_kafka_password"
        }
    }
}
```

#### Authentication Examples

**Plain Authentication:**
```json
"sasl_auth": {
    "mechanism": "plain",
    "username": "your_kafka_username",
    "password": "your_kafka_password"
}
```

**SCRAM Authentication:**
```json
"sasl_auth": {
    "mechanism": "scram",
    "scram_variant": "sha-256",
    "username": "your_kafka_username",
    "password": "your_kafka_password"
}
```

**OAuth Bearer Authentication:**
```json
"sasl_auth": {
    "mechanism": "oauthbearer",
    "mode": "static_token",
    "token": "your_oauth_token"
}
```

#### Required Configuration Properties

| Property | Description | Required |
| --- | --- | --- |
| `broker` | Kafka broker address | Yes |
| `topic` | Name of the Kafka topic where events will be published | Yes |
| `sasl_auth` | SASL authentication configuration object | Yes |

#### SASL Authentication Properties

| Property | Description | Required | Valid Values |
| --- | --- | --- | --- |
| `mechanism` | SASL authentication mechanism | Yes | `plain`, `scram`, `oauthbearer` |
| `username` | Username for authentication | Required for `plain` and `scram` | String |
| `password` | Password for authentication | Required for `plain` and `scram` | String |
| `scram_variant` | SCRAM variant (only for `scram` mechanism) | Required for `scram` | `sha-256`, `sha-512` |
| `token` | OAuth token (only for `oauthbearer` mechanism) | Required for `oauthbearer` | String |
| `mode` | OAuth mode (only for `oauthbearer` mechanism) | Required for `oauthbearer` | `static_token` |

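To make the mechanism-dependent requirements above concrete, here is a minimal Python sketch that assembles the `config` block for a `kafka` subscription and checks the required `sasl_auth` fields before you submit it. The helper name and error messages are illustrative, not part of any official SDK.

```python
# Minimal sketch: assemble and sanity-check the `config` block of a `kafka`
# subscription, following the two tables above (illustrative helper, not an SDK).

REQUIRED_SASL_FIELDS = {
    "plain": {"mechanism", "username", "password"},
    "scram": {"mechanism", "scram_variant", "username", "password"},
    "oauthbearer": {"mechanism", "mode", "token"},
}

SCRAM_VARIANTS = {"sha-256", "sha-512"}


def build_kafka_config(broker: str, topic: str, sasl_auth: dict) -> dict:
    mechanism = sasl_auth.get("mechanism")
    required = REQUIRED_SASL_FIELDS.get(mechanism)
    if required is None:
        raise ValueError(f"mechanism must be one of {sorted(REQUIRED_SASL_FIELDS)}, got {mechanism!r}")
    missing = required - sasl_auth.keys()
    if missing:
        raise ValueError(f"missing sasl_auth fields for {mechanism!r}: {sorted(missing)}")
    if mechanism == "scram" and sasl_auth["scram_variant"] not in SCRAM_VARIANTS:
        raise ValueError(f"scram_variant must be one of {sorted(SCRAM_VARIANTS)}")
    if mechanism == "oauthbearer" and sasl_auth["mode"] != "static_token":
        raise ValueError("mode must be 'static_token'")
    return {"broker": broker, "topic": topic, "sasl_auth": sasl_auth}


# Example usage with the illustrative values from the snippets above:
config = build_kafka_config(
    broker="kafka-cluster.example.com:9092",
    topic="pim-events",
    sasl_auth={"mechanism": "scram", "scram_variant": "sha-256",
               "username": "your_kafka_username", "password": "your_kafka_password"},
)
```
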
#### Event Delivery Guarantees

- **At-least-once delivery:** Events are guaranteed to be delivered at least once to the Kafka topic, so consumers should be prepared to handle occasional duplicates (see the consumer sketch below).
- **Ordering:** Events are delivered in the order they were generated within each partition.
- **Retry mechanism:** Failed deliveries are automatically retried with exponential backoff.

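Because delivery is at-least-once, the same event may occasionally reach your topic more than once. The following sketch, using the `confluent-kafka` Python client, shows one way to consume idempotently; the broker address, credentials, group id, and the assumption that each event payload carries a unique `id` field are all illustrative.

```python
import json

from confluent_kafka import Consumer

# Minimal sketch: consume events idempotently under at-least-once delivery.
# Connection details mirror the illustrative subscription config above; adjust
# the SASL settings to whatever your cluster actually uses.
consumer = Consumer({
    "bootstrap.servers": "kafka-cluster.example.com:9092",
    "group.id": "pim-event-consumer",          # illustrative group id
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,               # commit only after processing
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "your_kafka_username",
    "sasl.password": "your_kafka_password",
})
consumer.subscribe(["pim-events"])

processed_ids = set()  # use a persistent store (database, cache) in production


def handle(event: dict) -> None:
    print("processing", event.get("type"))  # your business logic goes here


try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        event = json.loads(msg.value())
        event_id = event.get("id")          # assumes each event carries a unique id
        if event_id not in processed_ids:   # skip duplicates caused by redelivery
            handle(event)
            processed_ids.add(event_id)
        consumer.commit(msg)                # commit the offset only after handling
finally:
    consumer.close()
```
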
#### Monitoring and Troubleshooting

- Monitor Kafka consumer lag to ensure your consumers are processing events in a timely manner (a lag-check sketch follows this list).
- Set up alerts for failed deliveries and consumer group lag.
- Use Kafka's built-in monitoring tools to track topic health and performance.

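As a starting point for lag monitoring, here is a small sketch, again with the `confluent-kafka` Python client, that compares a consumer group's committed offsets with the partition high watermarks. The topic, group id, and broker address are the same illustrative values used above.

```python
from confluent_kafka import Consumer, TopicPartition

# Minimal sketch: report per-partition consumer lag for a consumer group.
# Add the same security.protocol / sasl.* settings as in the consumer sketch
# if your cluster requires authentication.
consumer = Consumer({
    "bootstrap.servers": "kafka-cluster.example.com:9092",
    "group.id": "pim-event-consumer",
    "enable.auto.commit": False,
})

topic = "pim-events"
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

for tp in consumer.committed(partitions, timeout=10):
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    committed = tp.offset if tp.offset >= 0 else low  # no commit yet -> assume start
    print(f"partition {tp.partition}: committed={committed}, high={high}, lag={high - committed}")

consumer.close()
```
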
## Subscription Filters

When configuring a subscription, you can optionally define a **filter** to receive **only the events that match specific criteria**.
