There are multiple broker implementations available for use with {ServerlessProductName}, each of which has different event delivery guarantees and uses different underlying technologies. You can choose the broker implementation when you create a broker by specifying a broker class; otherwise, the default broker class is used. The default broker class can be configured by cluster administrators.

// TO DO: Need to add docs about setting default broker class.
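
In upstream Knative Eventing, cluster administrators set the default broker class in the `config-br-defaults` config map in the `knative-eventing` namespace. The following is a minimal sketch based on upstream Knative; the config map keys and field names are assumptions here, not verified {ServerlessProductName} settings:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  default-br-config: |
    # Brokers created without an explicit broker class annotation use this
    # class and default channel configuration. (Upstream Knative field names.)
    clusterDefault:
      brokerClass: MTChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: config-br-default-channel
      namespace: knative-eventing
----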
[id="serverless-using-brokers-channel-based"]
8
+
== Channel-based broker
9
+
10
+
The channel-based broker implementation internally uses channels for event delivery. Channel-based brokers provide different event delivery guarantees based on the channel implementation a broker instance uses, for example:

* A broker using the `InMemoryChannel` implementation is useful for development and testing purposes, but does not provide adequate event delivery guarantees for production environments.
* A broker using the `KafkaChannel` implementation provides the event delivery guarantees required for a production environment.
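
As an illustration of how a broker instance selects a channel implementation, in upstream Knative Eventing a channel-based broker can reference a config map whose `channel-template-spec` key names the channel kind to create. This is a sketch under that assumption; the config map name and the `spec` values shown are illustrative, not product-verified:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel-config
  namespace: knative-eventing
data:
  channel-template-spec: |
    # Channels that back this broker are created as KafkaChannel objects.
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
----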

[id="serverless-using-brokers-kafka"]
== Kafka broker
The Kafka broker is a broker implementation that uses Kafka internally to provide at-least-once delivery guarantees. It supports multiple Kafka versions and has a native integration with Kafka for storing and routing events.

// modules/serverless-install-kafka-odc.adoc

The following example shows a `KnativeKafka` custom resource with the channel, source, and broker options enabled:

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_servers> <2>
  source:
    enabled: true <3>
  broker:
    enabled: true <4>
    defaultConfig:
      bootstrapServers: <bootstrap_servers> <5>
      numPartitions: <num_partitions> <6>
      replicationFactor: <replication_factor> <7>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
<4> Enables developers to use the Knative Kafka broker implementation in the cluster.
<5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
<6> Defines the number of partitions of the Kafka topics that back the `Broker` objects. The default is `10`.
<7> Defines the replication factor of the Kafka topics that back the `Broker` objects. The default is `3`.

[NOTE]
====
The `replicationFactor` value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.
====

.Prerequisites

[IMPORTANT]
====
To use the Kafka channel, source, or broker on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel or broker, you must specify the bootstrap servers.
====

.. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
.. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.

= Configuring SASL authentication for Kafka brokers

As a cluster administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a username and password for a Kafka cluster.
* You have chosen the SASL mechanism to use, for example `PLAIN`, `SCRAM-SHA-256`, or `SCRAM-SHA-512`.
* If TLS is enabled, you also need the `ca.crt` certificate file for the Kafka cluster.

[NOTE]
====
It is recommended to enable TLS in addition to SASL.
====

.Procedure

. Create the certificate files as a secret in the `knative-eventing` namespace:
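+
A sketch of the command, assuming the secret key names (`protocol`, `sasl.mechanism`, `user`, `password`, `ca.crt`) that the upstream Knative Kafka broker expects:
+
[source,terminal]
----
$ oc create secret -n knative-eventing generic <secret_name> \
  --from-literal=protocol=SASL_SSL \
  --from-literal=sasl.mechanism=SCRAM-SHA-512 \
  --from-literal=user=<username> \
  --from-literal=password=<password> \
  --from-file=ca.crt=caroot.pem
----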

= Configuring TLS authentication for Kafka brokers

As a cluster administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a Kafka cluster CA certificate stored as a `.pem` file.
* You have a Kafka cluster client certificate and a key stored as `.pem` files.

.Procedure

. Create the certificate files as a secret in the `knative-eventing` namespace:
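+
A sketch of the command, assuming the secret key names (`ca.crt`, `user.crt`, `user.key`) that the upstream Knative Kafka broker expects:
+
[source,terminal]
----
$ oc create secret -n knative-eventing generic <secret_name> \
  --from-file=ca.crt=caroot.pem \
  --from-file=user.crt=certificate.pem \
  --from-file=user.key=key.pem
----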

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your {product-title} cluster.

.Procedure

. Create a Kafka-based broker as a YAML file:
+
[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka <1>
  name: example-kafka-broker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config <2>
    namespace: knative-eventing
----
<1> The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be `Kafka`.
<2> The default config map for Knative Kafka brokers. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator.
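
. Apply the Kafka-based broker YAML file; the file name below is a placeholder for wherever you saved the definition:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----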
Using Kafka components in an event-driven architecture provides "at least once" event delivery. This means that operations are retried until a return code value is received. This makes applications more resilient to lost events; however, it might result in duplicate events being sent.
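
In upstream Knative Eventing, retry behavior and a dead letter sink for events that cannot be delivered can be configured on a broker's `delivery` spec. The following is a sketch; the field names come from upstream Knative, and `dead-letter-handler` is a hypothetical service:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka
  name: example-kafka-broker
spec:
  delivery:
    # Retry failed deliveries up to five times before giving up.
    retry: 5
    backoffPolicy: exponential
    backoffDelay: PT0.5S
    # Events that still fail are routed to this (hypothetical) sink.
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-handler
----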