
Commit e05a4d6

Merge pull request #40701 from abrennan89/kafkadocs

SRVKE-747: Broker docs clean up and add Kafka broker

2 parents: fceecfb + 38588e2

24 files changed (+279, −71)

_topic_maps/_topic_map.yml (10 additions, 15 deletions)

@@ -3210,6 +3210,8 @@ Topics:
   File: serverless-configuring-routes
 - Name: Event sinks
   File: serverless-event-sinks
+- Name: Event delivery
+  File: serverless-event-delivery
 - Name: Using the API server source
   File: serverless-apiserversource
 - Name: Using a ping source
@@ -3220,6 +3222,12 @@ Topics:
   File: serverless-creating-channels
 - Name: Subscriptions
   File: serverless-subs
+# Brokers
+- Name: Brokers
+  File: serverless-using-brokers
+# Triggers
+- Name: Triggers
+  File: serverless-triggers
 - Name: Knative Kafka
   File: serverless-kafka-developer
 # Admin guide
@@ -3229,8 +3237,8 @@ Topics:
 - Name: Configuring OpenShift Serverless
   File: serverless-configuration
 # Eventing
-- Name: Configuring channel defaults
-  File: serverless-configuring-channels
+- Name: Configuring Knative Eventing defaults
+  File: serverless-configuring-eventing-defaults
 - Name: Knative Kafka
   File: serverless-kafka-admin
 - Name: Creating Knative Eventing components in the Administrator perspective
@@ -3289,19 +3297,6 @@ Topics:
   File: serverless-custom-tls-cert-domain-mapping
 - Name: Security configuration for Knative Kafka
   File: serverless-kafka-security
-# Knative Eventing
-- Name: Knative Eventing
-  Dir: knative_eventing
-  Topics:
-  # Brokers
-  - Name: Brokers
-    File: serverless-using-brokers
-  # Triggers
-  - Name: Triggers
-    File: serverless-triggers
-  # Event delivery
-  - Name: Event delivery
-    File: serverless-event-delivery
 # Functions
 - Name: Functions
   Dir: functions
New file (19 additions, 0 deletions)

+[id="serverless-broker-types_{context}"]
+= Broker types
+
+There are multiple broker implementations available for use with {ServerlessProductName}, each of which has different event delivery guarantees and uses different underlying technologies. You can choose the broker implementation when creating a broker by specifying a broker class; otherwise, the default broker class is used. The default broker class can be configured by cluster administrators.
+// TO DO: Need to add docs about setting default broker class.
+
+[id="serverless-using-brokers-channel-based"]
+== Channel-based broker
+
+The channel-based broker implementation internally uses channels for event delivery. Channel-based brokers provide different event delivery guarantees based on the channel implementation that a broker instance uses. For example:
+
+* A broker using the `InMemoryChannel` implementation is useful for development and testing purposes, but does not provide adequate event delivery guarantees for production environments.
+
+* A broker using the `KafkaChannel` implementation provides the event delivery guarantees required for a production environment.
+
+[id="serverless-using-brokers-kafka"]
+== Kafka broker
+
+The Kafka broker is a broker implementation that uses Kafka internally to provide at-least-once delivery guarantees. It supports multiple Kafka versions, and has a native integration with Kafka for storing and routing events.
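
Note on the TO DO above: in upstream Knative Eventing, the default broker class is set through the `config-br-defaults` config map in the `knative-eventing` namespace. A minimal sketch under that assumption (key names follow upstream and may differ in the productized release):

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  # Broker objects that omit the eventing.knative.dev/broker.class
  # annotation fall back to the class configured here.
  default-br-config: |
    clusterDefault:
      brokerClass: Kafka
      apiVersion: v1
      kind: ConfigMap
      name: kafka-broker-config
      namespace: knative-eventing
----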

modules/serverless-channel-default.adoc (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * serverless/channels/serverless-channels.adoc
+// * serverless/admin_guide/serverless-configuring-eventing-defaults.adoc
 
 [id="serverless-channel-default_{context}"]
 = Configuring the default channel implementation

modules/serverless-create-kafka-channel-yaml.adoc (2 additions, 2 deletions)

@@ -1,7 +1,7 @@
 // Module included in the following assemblies:
 //
-// * serverless/knative_eventing/serverless-creating-channels.adoc
-// * serverless/knative_eventing/serverless-kafka.adoc
+// * serverless/develop/serverless-creating-channels.adoc
+// * serverless/develop/serverless-kafka-developer.adoc
 
 [id="serverless-create-kafka-channel-yaml_{context}"]
 = Creating a Kafka channel by using YAML

modules/serverless-event-delivery-component-behaviors.adoc (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// serverless/knative_eventing/serverless-event-delivery.adoc
+// serverless/develop/serverless-event-delivery.adoc
 
 [id="serverless-event-delivery-component-behaviors_{context}"]
 = Event delivery behavior for Knative Eventing channels

modules/serverless-install-kafka-odc.adoc (16 additions, 1 deletion)

@@ -21,10 +21,25 @@ spec:
     bootstrapServers: <bootstrap_servers> <2>
   source:
     enabled: true <3>
+  broker:
+    enabled: true <4>
+    defaultConfig:
+      bootstrapServers: <bootstrap_servers> <5>
+      numPartitions: <num_partitions> <6>
+      replicationFactor: <replication_factor> <7>
 ----
 <1> Enables developers to use the `KafkaChannel` channel type in the cluster.
 <2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
 <3> Enables developers to use the `KafkaSource` event source type in the cluster.
+<4> Enables developers to use the Knative Kafka broker implementation in the cluster.
+<5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
+<6> Defines the number of partitions of the Kafka topics that back the `Broker` objects. The default is `10`.
+<7> Defines the replication factor of the Kafka topics that back the `Broker` objects. The default is `3`.
++
+[NOTE]
+====
+The `replicationFactor` value must be less than or equal to the number of nodes in your Red Hat AMQ Streams cluster.
+====
 
 .Prerequisites
 
@@ -42,7 +57,7 @@ spec:
 +
 [IMPORTANT]
 ====
-To use the Kafka channel or Kafka source on your cluster, you must toggle the *Enable* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, you must specify the Boostrap Servers.
+To use the Kafka channel, source, or broker on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel or broker, you must specify the bootstrap servers.
 ====
 .. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
 .. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.
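
Taken together with the existing `channel` and `source` stanzas, a fully populated `KnativeKafka` CR might look like the following sketch. The bootstrap server address is illustrative; use the address of your own AMQ Streams cluster:

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true
    bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092 # illustrative address
  source:
    enabled: true
  broker:
    enabled: true
    defaultConfig:
      bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092 # illustrative address
      numPartitions: 10    # default
      replicationFactor: 3 # default; must not exceed the number of Kafka nodes
----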
New file (57 additions, 0 deletions)

+// Module is included in the following assemblies:
+//
+// * serverless/admin_guide/serverless-kafka-admin.adoc
+
+[id="serverless-kafka-broker-sasl-default-config_{context}"]
+= Configuring SASL authentication for Kafka brokers
+
+As a cluster administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+
+.Prerequisites
+
+* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
+* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
+* You have a username and password for a Kafka cluster.
+* You have chosen the SASL mechanism to use, for example `PLAIN`, `SCRAM-SHA-256`, or `SCRAM-SHA-512`.
+* If TLS is enabled, you also need the `ca.crt` certificate file for the Kafka cluster.
+
+[NOTE]
+====
+It is recommended to enable TLS in addition to SASL.
+====
+
+.Procedure
+
+. Create the certificate files as a secret in the `knative-eventing` namespace:
++
+[source,terminal]
+----
+$ oc create secret -n knative-eventing generic <secret_name> \
+  --from-literal=protocol=SASL_SSL \
+  --from-literal=sasl.mechanism=<sasl_mechanism> \
+  --from-file=ca.crt=caroot.pem \
+  --from-literal=password="SecretPassword" \
+  --from-literal=user="my-sasl-user"
+----
++
+[IMPORTANT]
+====
+Use the key names `ca.crt`, `password`, and `sasl.mechanism`. Do not change them.
+====
+
+. Edit the `KnativeKafka` CR and add a reference to your secret in the `broker` spec:
++
+[source,yaml]
+----
+apiVersion: operator.serverless.openshift.io/v1alpha1
+kind: KnativeKafka
+metadata:
+  namespace: knative-eventing
+  name: knative-kafka
+spec:
+  broker:
+    enabled: true
+    defaultConfig:
+      authSecretName: <secret_name>
+...
+----
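
Not part of the documented procedure, but since the IMPORTANT admonition hinges on exact key names, a quick sanity check after creating the secret:

[source,terminal]
----
$ oc describe secret -n knative-eventing <secret_name>
----

The `Data` section of the output should list `ca.crt`, `password`, `protocol`, `sasl.mechanism`, and `user`.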
New file (50 additions, 0 deletions)

+// Module is included in the following assemblies:
+//
+// * serverless/admin_guide/serverless-kafka-admin.adoc
+
+[id="serverless-kafka-broker-tls-default-config_{context}"]
+= Configuring TLS authentication for Kafka brokers
+
+As a cluster administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+
+.Prerequisites
+
+* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
+* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
+* You have a Kafka cluster CA certificate stored as a `.pem` file.
+* You have a Kafka cluster client certificate and a key stored as `.pem` files.
+
+.Procedure
+
+. Create the certificate files as a secret in the `knative-eventing` namespace:
++
+[source,terminal]
+----
+$ oc create secret -n knative-eventing generic <secret_name> \
+  --from-literal=protocol=SSL \
+  --from-file=ca.crt=caroot.pem \
+  --from-file=user.crt=certificate.pem \
+  --from-file=user.key=key.pem
+----
++
+[IMPORTANT]
+====
+Use the key names `ca.crt`, `user.crt`, and `user.key`. Do not change them.
+====
+
+. Edit the `KnativeKafka` CR and add a reference to your secret in the `broker` spec:
++
+[source,yaml]
+----
+apiVersion: operator.serverless.openshift.io/v1alpha1
+kind: KnativeKafka
+metadata:
+  namespace: knative-eventing
+  name: knative-kafka
+spec:
+  broker:
+    enabled: true
+    defaultConfig:
+      authSecretName: <secret_name>
+...
+----
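
As an aside (not part of this commit), it can be worth confirming that `certificate.pem` and `key.pem` are actually a matching pair before creating the secret; a common check, assuming an RSA key:

[source,terminal]
----
$ openssl x509 -noout -modulus -in certificate.pem | openssl md5
$ openssl rsa -noout -modulus -in key.pem | openssl md5
----

If the two digests differ, the certificate and key do not match.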
New file (40 additions, 0 deletions)

+// Module included in the following assemblies:
+//
+// * serverless/develop/serverless-kafka-developer.adoc
+// * serverless/develop/serverless-using-brokers.adoc
+
+[id="serverless-kafka-broker_{context}"]
+= Creating a Kafka broker
+
+.Prerequisites
+
+* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your {product-title} cluster.
+
+.Procedure
+
+. Create a Kafka-based broker as a YAML file:
++
+[source,yaml]
+----
+apiVersion: eventing.knative.dev/v1
+kind: Broker
+metadata:
+  annotations:
+    eventing.knative.dev/broker.class: Kafka <1>
+  name: example-kafka-broker
+spec:
+  config:
+    apiVersion: v1
+    kind: ConfigMap
+    name: kafka-broker-config <2>
+    namespace: knative-eventing
+----
+<1> The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be `Kafka`.
+<2> The default config map for Knative Kafka brokers. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator.
+
+. Apply the Kafka-based broker YAML file:
++
+[source,terminal]
+----
+$ oc apply -f <filename>
+----
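
The module stops at creating the broker; events only flow once something subscribes to it. As a usage sketch (not part of this commit), a `Trigger` that forwards all events from `example-kafka-broker` to a hypothetical Knative service named `event-display`:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: example-trigger
spec:
  broker: example-kafka-broker # the broker created above
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # hypothetical sink service
----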

modules/serverless-kafka-event-delivery.adoc (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 // * serverless/develop/serverless-kafka-developer.adoc
 
 [id="serverless-kafka-delivery-retries_{context}"]
-= Event delivery and retries
+= Kafka event delivery and retries
 
 Using Kafka components in an event-driven architecture provides "at least once" event delivery. This means that operations are retried until a return code value is received. This makes applications more resilient to lost events; however, it might result in duplicate events being sent.
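
The retry behavior described in this paragraph is tunable through the Knative `delivery` spec. A minimal sketch on a broker, using the upstream `retry`, `backoffPolicy`, and `backoffDelay` fields with illustrative values:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka
  name: example-kafka-broker
spec:
  delivery:
    retry: 5                   # maximum number of redelivery attempts
    backoffPolicy: exponential # or "linear"
    backoffDelay: PT0.5S       # ISO-8601 duration used as the base delay
----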