Commit 9eec0d4

Merge pull request #44552 from abrennan89/kafkaAbstracts
SRVCOM-1728: Updating Kafka abstracts for Jupiter guidelines
2 parents 150f06b + 7b01869 commit 9eec0d4

10 files changed: +63 -71 lines changed

modules/serverless-install-kafka-odc.adoc

Lines changed: 39 additions & 31 deletions
@@ -6,8 +6,41 @@
 [id="serverless-install-kafka-odc_{context}"]
 = Installing Knative Kafka

-The {ServerlessOperatorName} provides the Knative Kafka API that can be used to create a `KnativeKafka` custom resource:
+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Knative Kafka functionality is available in an {ServerlessProductName} installation if you have installed the `KnativeKafka` custom resource.

+.Prerequisites
+
+* You have installed the {ServerlessOperatorName} and Knative Eventing on your cluster.
+* You have access to a Red Hat AMQ Streams cluster.
+* Install the OpenShift CLI (`oc`) if you want to use the verification steps.
+
+// OCP
+ifdef::openshift-enterprise[]
+* You have cluster administrator permissions on {product-title}.
+endif::[]
+
+// OSD
+ifdef::openshift-dedicated[]
+* You have cluster or dedicated administrator permissions on {product-title}.
+endif::[]
+
+* You are logged in to the {product-title} web console.
+
+.Procedure
+
+. In the *Administrator* perspective, navigate to *Operators* -> *Installed Operators*.
+
+. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
+
+. In the list of *Provided APIs* for the {ServerlessOperatorName}, find the *Knative Kafka* box and click *Create Instance*.
+
+. Configure the *KnativeKafka* object in the *Create Knative Kafka* page.
++
+[IMPORTANT]
+====
+To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, broker, or sink you must specify the bootstrap servers.
+====
++
 .Example `KnativeKafka` custom resource
 [source,yaml]
 ----
@@ -38,54 +71,29 @@ spec:
 <5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
 <6> Defines the number of partitions of the Kafka topics, backed by the `Broker` objects. The default is `10`.
 <7> Defines the replication factor of the Kafka topics, backed by the `Broker` objects. The default is `3`.
+<8> Enables developers to use a Kafka sink in the cluster.
 +
 [NOTE]
 ====
 The `replicationFactor` value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.
 ====
-<8> Enables developers to use a Kafka sink in the cluster.
-
-.Prerequisites
-
-* You have installed the {ServerlessOperatorName} and Knative Eventing on your cluster.
-* You have access to a Red Hat AMQ Streams cluster.
-* Install the OpenShift CLI (`oc`) if you want to use the verification steps.
-
-// OCP
-ifdef::openshift-enterprise[]
-* You have cluster administrator permissions on {product-title}.
-endif::[]
-
-// OSD
-ifdef::openshift-dedicated[]
-* You have cluster or dedicated administrator permissions on {product-title}.
-endif::[]

-* You are logged in to the {product-title} web console.
-
-.Procedure
-
-. In the *Administrator* perspective, navigate to *Operators* -> *Installed Operators*.
-. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
-. In the list of *Provided APIs* for the {ServerlessOperatorName}, find the *Knative Kafka* box and click *Create Instance*.
-. Configure the *KnativeKafka* object in the *Create Knative Kafka* page.
-+
-[IMPORTANT]
-====
-To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, broker or sink, you must specify the bootstrap servers.
-====
 .. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
+
 .. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.
+
 . Click *Create* after you have completed any of the optional configurations for Kafka. You are automatically directed to the *Knative Kafka* tab where *knative-kafka* is in the list of resources.

 .Verification

 . Click on the *knative-kafka* resource in the *Knative Kafka* tab. You are automatically directed to the *Knative Kafka Overview* page.
+
 . View the list of *Conditions* for the resource and confirm that they have a status of *True*.
 +
 image::knative-kafka-overview.png[Kafka Knative Overview page showing Conditions]
 +
 If the conditions have a status of *Unknown* or *False*, wait a few moments to refresh the page.
+
 . Check that the Knative Kafka resources have been created:
 +
 [source,terminal]

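Note for readers: the example `KnativeKafka` custom resource that callouts <5> through <8> annotate falls outside the hunks shown above. As a rough guide only, a `KnativeKafka` object with all four capabilities enabled might be shaped like the following sketch; the `operator.serverless.openshift.io/v1alpha1` API version and the exact field layout are assumptions inferred from the callout descriptions, not the elided example itself.

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true                           # toggle for the Kafka channel
    bootstrapServers: <bootstrap_servers>   # <5> comma-separated AMQ Streams bootstrap servers
  source:
    enabled: true                           # toggle for the Kafka source
  broker:
    enabled: true                           # toggle for the Kafka broker
    defaultConfig:
      bootstrapServers: <bootstrap_servers> # <5>
      numPartitions: 10                     # <6> partitions for Broker-backed topics
      replicationFactor: 3                  # <7> must not exceed the number of Kafka nodes
  sink:
    enabled: true                           # <8> allows developers to use a Kafka sink
----
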
modules/serverless-kafka-broker-sasl-default-config.adoc

Lines changed: 3 additions & 7 deletions
@@ -6,21 +6,17 @@
 [id="serverless-kafka-broker-sasl-default-config_{context}"]
 = Configuring SASL authentication for Kafka brokers

-// OCP
-ifdef::openshift-enterprise[]
-As a cluster administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+_Simple Authentication and Security Layer_ (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster, otherwise events cannot be produced or consumed. You can set up SASL for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

 .Prerequisites

+// OCP
+ifdef::openshift-enterprise[]
 * You have cluster administrator permissions on {product-title}.
 endif::[]

 // OSD
 ifdef::openshift-dedicated[]
-As a cluster or dedicated administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
-
-.Prerequisites
-
 * You have cluster or dedicated administrator permissions on {product-title}.
 endif::[]

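Note for readers: the SASL procedure body falls outside this hunk. In outline, SASL credentials are typically stored as a secret in the `knative-eventing` namespace and referenced from the broker configuration; the secret key names and the `authSecretName` field in the sketch below are assumptions drawn from common Knative Kafka broker conventions, not from the elided steps.

[source,terminal]
----
$ oc create secret -n knative-eventing generic <sasl_secret> \
    --from-literal=protocol=SASL_SSL \
    --from-literal=sasl.mechanism=SCRAM-SHA-512 \
    --from-file=ca.crt=caroot.pem \
    --from-literal=user="my-sasl-user" \
    --from-literal=password="my-password"
----

[source,yaml]
----
# Sketch of referencing the secret from the KnativeKafka CR (field name assumed)
spec:
  broker:
    defaultConfig:
      authSecretName: <sasl_secret>
----
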
modules/serverless-kafka-broker-tls-default-config.adoc

Lines changed: 4 additions & 8 deletions
@@ -6,21 +6,17 @@
 [id="serverless-kafka-broker-tls-default-config_{context}"]
 = Configuring TLS authentication for Kafka brokers

-// OCP
-ifdef::openshift-enterprise[]
-As a cluster administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
+_Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. You can set up TLS for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

 .Prerequisites

+// OCP
+ifdef::openshift-enterprise[]
 * You have cluster administrator permissions on {product-title}.
 endif::[]

 // OSD
 ifdef::openshift-dedicated[]
-As a cluster or dedicated administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).
-
-.Prerequisites
-
 * You have cluster or dedicated administrator permissions on {product-title}.
 endif::[]

@@ -29,7 +25,7 @@ endif::[]
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have a Kafka cluster CA certificate stored as a `.pem` file.
 * You have a Kafka cluster client certificate and a key stored as `.pem` files.
-* Install the OpenShift CLI (`oc`).
+* Install the OpenShift (`oc`) CLI.

 .Procedure

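Note for readers: the TLS procedure body is likewise outside this hunk. The prerequisites above (CA certificate, client certificate, and key as `.pem` files) are usually bundled into a secret in the `knative-eventing` namespace that the broker configuration then references; the key names in the sketch below are assumptions, not the elided procedure.

[source,terminal]
----
$ oc create secret -n knative-eventing generic <tls_secret> \
    --from-literal=protocol=SSL \
    --from-file=ca.crt=caroot.pem \
    --from-file=user.crt=certificate.pem \
    --from-file=user.key=key.pem
----
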
modules/serverless-kafka-broker.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 [id="serverless-kafka-broker_{context}"]
 = Creating a Kafka broker by using YAML

-You can create a Kafka broker by using YAML files.
+Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a `Broker` object, then apply it by using the `oc apply` command.

 .Prerequisites

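Note for readers: the new abstract refers to defining a `Broker` object and applying it with `oc apply`. A minimal sketch, assuming the upstream `eventing.knative.dev/v1` API and the `Kafka` broker class annotation used by Knative Kafka (not part of this commit's hunks):

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    # selects the Kafka broker implementation instead of the default channel-based broker
    eventing.knative.dev/broker.class: Kafka
  name: example-kafka-broker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config      # config map that holds bootstrap servers and topic defaults
    namespace: knative-eventing
----

Applying the file with `oc apply -f <filename>.yaml` creates the broker in the current project.
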
modules/serverless-kafka-sink.adoc

Lines changed: 2 additions & 2 deletions
@@ -6,14 +6,14 @@
 [id="serverless-kafka-sink_{context}"]
 = Using a Kafka sink

-You can create an event sink called a Kafka sink, that sends events to a Kafka topic. To do this, you must create a `KafkaSink` object. The following procedure explains how you can create a `KafkaSink` object by using YAML files and the `oc` CLI.
+You can create an event sink called a Kafka sink that sends events to a Kafka topic. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka sink by using YAML, you must create a YAML file that defines a `KafkaSink` object, then apply it by using the `oc apply` command.

 .Prerequisites

 * The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your cluster.
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
-* Install the OpenShift CLI (`oc`).
+* Install the OpenShift (`oc`) CLI.

 .Procedure

modules/serverless-kafka-source-kn.adoc

Lines changed: 3 additions & 2 deletions
@@ -7,14 +7,15 @@
 [id="serverless-kafka-source-kn_{context}"]
 = Creating a Kafka event source by using the Knative CLI

-This section describes how to create a Kafka event source by using the `kn` command.
+You can use the `kn source kafka create` command to create a Kafka source by using the `kn` CLI. Using the `kn` CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly.

 .Prerequisites

 * The {ServerlessOperatorName}, Knative Eventing, Knative Serving, and the `KnativeKafka` custom resource (CR) are installed on your cluster.
 * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
 * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
-* You have installed the `kn` CLI.
+* You have installed the Knative (`kn`) CLI.
+* Optional: You have installed the OpenShift (`oc`) CLI if you want to use the verification steps in this procedure.

 .Procedure

modules/serverless-kafka-source-odc.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 [id="serverless-kafka-source-odc_{context}"]
 = Creating a Kafka event source by using the web console

-You can create and verify a Kafka event source from the {product-title} web console.
+After Knative Kafka is installed on your cluster, you can create a Kafka source by using the web console. Using the {product-title} web console provides a streamlined and intuitive user interface to create a Kafka source.

 .Prerequisites

modules/serverless-kafka-source-yaml.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 [id="serverless-kafka-source-yaml_{context}"]
 = Creating a Kafka event source by using YAML

-You can create a Kafka event source by using YAML.
+Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka source by using YAML, you must create a YAML file that defines a `KafkaSource` object, then apply it by using the `oc apply` command.

 .Prerequisites

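Note for readers: the `KafkaSource` object named in the new abstract might look like the following minimal sketch, assuming the `sources.knative.dev/v1beta1` API and a Knative service as the sink; placeholders are illustrative, not from the elided procedure.

[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: <source_name>
spec:
  consumerGroup: <group_name>       # consumer group that the source joins
  bootstrapServers:
    - <bootstrap_server>
  topics:
    - <topic_name>                  # topics to read events from
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <service_name>          # Knative service that receives the events
----
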
serverless/admin_guide/serverless-kafka-admin.adoc

Lines changed: 4 additions & 15 deletions
@@ -6,6 +6,8 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Kafka provides options for event source, channel, broker, and event sink capabilities.
+
 // OCP
 ifdef::openshift-enterprise[]
 In addition to the Knative Eventing components that are provided as part of a core {ServerlessProductName} installation, cluster administrators can install the `KnativeKafka` custom resource (CR).
@@ -30,21 +32,8 @@ The `KnativeKafka` CR provides users with additional options, such as:

 include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]

-[id="serverless-kafka-admin-default-configs"]
-== Configuring default settings for Kafka components
-
-// OCP
-ifdef::openshift-enterprise[]
-If you have cluster administrator permissions, you can set default options for Knative Kafka components, either for the whole cluster or for a specific namespace.
-endif::[]
-
-// OSD
-ifdef::openshift-dedicated[]
-If you have cluster or dedicated administrator permissions, you can set default options for Knative Kafka components, either for the whole cluster or for a specific namespace.
-endif::[]
-
-include::modules/serverless-kafka-broker-tls-default-config.adoc[leveloffset=+2]
-include::modules/serverless-kafka-broker-sasl-default-config.adoc[leveloffset=+2]
+include::modules/serverless-kafka-broker-tls-default-config.adoc[leveloffset=+1]
+include::modules/serverless-kafka-broker-sasl-default-config.adoc[leveloffset=+1]

 [id="additional-resources_serverless-kafka-admin"]
 [role="_additional-resources"]

serverless/develop/serverless-kafka-developer.adoc

Lines changed: 5 additions & 3 deletions
@@ -6,6 +6,8 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

+Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Kafka provides options for event source, channel, broker, and event sink capabilities.
+
 Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-install-kafka-odc_serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

 // OCP
@@ -30,7 +32,7 @@ See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-
 [id="serverless-kafka-developer-source"]
 == Kafka source

-You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink.
+You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the {product-title} web console, the Knative (`kn`) CLI, or by creating a `KafkaSource` object directly as a YAML file and using the OpenShift (`oc`) CLI to apply it.

 // dev console
 include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]
@@ -61,11 +63,11 @@ include::modules/serverless-create-kafka-channel-yaml.adoc[leveloffset=+1]
 [id="serverless-kafka-developer-sink"]
 == Kafka sink

+Kafka sinks are a type of xref:../../serverless/develop/serverless-event-sinks.adoc#serverless-event-sinks[event sink] that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to a Kafka topic by using a Kafka sink.
+
 :FeatureName: Kafka sink
 include::snippets/technology-preview.adoc[leveloffset=+2]

-Kafka sinks are a type of xref:../../serverless/develop/serverless-event-sinks.adoc#serverless-event-sinks[event sink] that are available if a cluster administrator has enabled Kafka on your cluster. You can send events directly from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to a Kafka topic by using a Kafka sink.
-
 // Kafka sink
 include::modules/serverless-kafka-sink.adoc[leveloffset=+2]