Commit 351bfef

Merge pull request #26450 from abrennan89/SRVKE-562
SRVKE-562: Added Kafka install docs
2 parents 277a5d7 + 72080a5 commit 351bfef

File tree

5 files changed: +147 −1 lines changed


_topic_map.yml

Lines changed: 4 additions & 1 deletion

@@ -1631,7 +1631,7 @@ Topics:
   - Name: Creating a multicomponent application with odo
     File: creating-a-multicomponent-application-with-odo
   - Name: Creating an application with a database
-    File: creating-an-application-with-a-database
+    File: creating-an-application-with-a-database
   - Name: Using devfiles in odo
     File: using-devfiles-in-odo
   - Name: Working with storage
@@ -2554,6 +2554,9 @@ Topics:
   - Name: Using PingSource
     File: serverless-pingsource
+  # Knative Kafka
+  # - Name: Using Apache Kafka with OpenShift Serverless
+  #   File: serverless-kafka
   # Networking
   - Name: Networking
     Dir: networking

images/knative-kafka-overview.png

228 KB
Lines changed: 49 additions & 0 deletions

// Module is included in the following assemblies:
//
// serverless/serverless-kafka.adoc

[id="serverless-install-kafka-odc_{context}"]
= Installing Apache Kafka components using the web console

Cluster administrators can enable the use of Apache Kafka functionality in an {ServerlessProductName} deployment by creating an instance of the `KnativeKafka` custom resource, which is provided by the *Knative Kafka* API of the {ServerlessOperatorName}.

.Prerequisites

* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.
* You are logged in to the web console.

.Procedure

. In the *Administrator* perspective, navigate to *Operators* → *Installed Operators*.
. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
. Click *Knative Kafka* in the list of *Provided APIs* for the {ServerlessOperatorName} to go to the *Knative Kafka* tab.
. Click *Create Knative Kafka*.
. Optional: Configure the *KnativeKafka* object on the *Create Knative Kafka* page, either by using the default form or by editing the YAML.
.. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
.. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.
. Click *Create* after you have completed any optional configurations for Kafka. You are automatically directed to the *Knative Kafka* tab, where *knative-kafka* appears in the list of resources.

.Verification steps

. Click the *knative-kafka* resource on the *Knative Kafka* tab. You are automatically directed to the *Knative Kafka Overview* page.
. View the list of *Conditions* for the resource and confirm that they have a status of *True*.
+
image::knative-kafka-overview.png[Kafka Knative Overview page showing Conditions]
+
If the conditions have a status of *Unknown* or *False*, wait a few moments and then refresh the page.
. Check that the Knative Kafka resources have been created:
+
[source,terminal]
----
$ oc get pods -n knative-eventing
----
+
.Example output
[source,terminal]
----
NAME                                   READY   STATUS    RESTARTS   AGE
kafka-ch-controller-5d85f5f779-kqvs4   1/1     Running   0          126m
kafka-webhook-66bd8688d6-2grvf         1/1     Running   0          126m
----
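The verification above ends by eyeballing the `oc get pods` listing. As an illustration only (not part of the PR), the same readiness check can be scripted; here the sample listing above stands in for live `oc` output, so the sketch runs without a cluster:

```shell
# Illustration only: a copy of the sample "oc get pods -n knative-eventing"
# output stands in for a live cluster, so this runs without oc installed.
pods='NAME                                   READY   STATUS    RESTARTS   AGE
kafka-ch-controller-5d85f5f779-kqvs4   1/1     Running   0          126m
kafka-webhook-66bd8688d6-2grvf         1/1     Running   0          126m'

# Skip the header row; flag any pod whose STATUS column is not "Running".
result=$(printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" { bad = 1 } END { print (bad ? "not ready" : "all Running") }')
echo "$result"
```

Against a real cluster you would pipe `oc get pods -n knative-eventing` into the same `awk` filter instead of the sample variable.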
Lines changed: 77 additions & 0 deletions

// Module is included in the following assemblies:
//
// serverless/serverless-kafka.adoc

[id="serverless-install-kafka-yaml_{context}"]
= Installing Apache Kafka components using YAML

Cluster administrators can enable the use of Apache Kafka functionality in an {ServerlessProductName} deployment by creating an instance of the `KnativeKafka` custom resource, which is provided by the *Knative Kafka* API of the {ServerlessOperatorName}.

.Prerequisites

* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.

.Procedure

. Create a YAML file that contains the following:
+
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_server> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
. Apply the YAML file:
+
[source,terminal]
----
$ oc apply -f <filename>
----

.Verification steps

. Check that the Kafka installation completed successfully by inspecting the installation status conditions. For example:
+
[source,terminal]
----
$ oc get knativekafka.operator.serverless.openshift.io/knative-kafka \
  -n knative-eventing \
  --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
----
+
.Example output
[source,terminal]
----
DeploymentsAvailable=True
InstallSucceeded=True
Ready=True
----
+
If the conditions have a status of `Unknown` or `False`, wait a few moments and then try again.
. Check that the Knative Kafka resources have been created:
+
[source,terminal]
----
$ oc get pods -n knative-eventing
----
+
.Example output
[source,terminal]
----
NAME                                   READY   STATUS    RESTARTS   AGE
kafka-ch-controller-5d85f5f779-kqvs4   1/1     Running   0          126m
kafka-webhook-66bd8688d6-2grvf         1/1     Running   0          126m
----
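The `--template` check above prints one `type=status` pair per line. A minimal sketch, again illustrative rather than part of the PR, of gating a script on every condition being `True` (the sample output stands in for the live `oc get knativekafka` result):

```shell
# Illustration only: the sample condition output stands in for the live
# "oc get knativekafka ... --template=..." result.
conditions='DeploymentsAvailable=True
InstallSucceeded=True
Ready=True'

# Count lines that do not end in "=True"; zero means the install is ready.
# The "|| true" keeps the assignment safe under "set -e" when grep matches nothing.
not_ready=$(printf '%s\n' "$conditions" | grep -cv '=True$' || true)
if [ "$not_ready" -eq 0 ]; then
  echo "KnativeKafka is ready"
else
  echo "KnativeKafka is not ready yet"
fi
```

Looping this check with a short sleep gives a simple wait-until-ready wrapper for automation, in place of manually retrying the `oc get` command.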

serverless/serverless-kafka.adoc

Lines changed: 17 additions & 0 deletions

include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka"]
= Using Apache Kafka with {ServerlessProductName}
include::modules/common-attributes.adoc[]
:context: serverless-kafka

toc::[]

:FeatureName: Apache Kafka on {ServerlessProductName}
include::modules/technology-preview.adoc[leveloffset=+2]

You can use the `KafkaChannel` channel type and the `KafkaSource` event source with {ServerlessProductName}. To do this, you must install the Knative Kafka components and configure the integration between {ServerlessProductName} and a supported link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/index[Red Hat AMQ Streams] cluster.

// Install Kafka
include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]
include::modules/serverless-install-kafka-yaml.adoc[leveloffset=+1]
