[id="serverless-install-kafka-odc_{context}"]
= Installing Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with {ServerlessProductName}. Knative Kafka functionality is available in an {ServerlessProductName} installation if you have installed the `KnativeKafka` custom resource.
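
A minimal sketch of that resource is shown below for orientation. The `apiVersion` value is an assumption here; the example in the procedure that follows is the reference for the full layout.

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1 # assumed API group and version
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing # the project used in the procedure below
spec: {} # the channel, source, broker, and sink options are enabled under spec
----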

.Prerequisites

* You have installed the {ServerlessOperatorName} and Knative Eventing on your cluster.
* You have access to a Red Hat AMQ Streams cluster.
* You have installed the OpenShift CLI (`oc`) if you want to use the verification steps.

// OCP
ifdef::openshift-enterprise[]
* You have cluster administrator permissions on {product-title}.
endif::[]

// OSD
ifdef::openshift-dedicated[]
* You have cluster or dedicated administrator permissions on {product-title}.
endif::[]

* You are logged in to the {product-title} web console.

.Procedure

. In the *Administrator* perspective, navigate to *Operators* -> *Installed Operators*.

. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.

. In the list of *Provided APIs* for the {ServerlessOperatorName}, find the *Knative Kafka* box and click *Create Instance*.

. Configure the *KnativeKafka* object in the *Create Knative Kafka* page.
+
[IMPORTANT]
====
To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the *enabled* switch for the options that you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, broker, or sink, you must specify the bootstrap servers.
====
+
.Example `KnativeKafka` custom resource
[source,yaml]
----
# ... (the example spec and callouts <1> to <4> are not included in this excerpt)
----
<5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
<6> Defines the number of partitions of the Kafka topics, backed by the `Broker` objects. The default is `10`.
<7> Defines the replication factor of the Kafka topics, backed by the `Broker` objects. The default is `3`.
<8> Enables developers to use a Kafka sink in the cluster.
+
[NOTE]
====
The `replicationFactor` value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster. See the sketch after this procedure for an illustration.
====

.. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.

.. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.

. Click *Create* after you have completed any of the optional configurations for Kafka. You are automatically directed to the *Knative Kafka* tab where *knative-kafka* is in the list of resources.
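
The broker settings described by callouts <5> to <7> can be summarized in the following sketch. It is illustrative only: the bootstrap server address is hypothetical, the field names other than `replicationFactor` are assumptions here, and the values assume a three-node Red Hat AMQ Streams cluster so that `replicationFactor` stays within the limit described in the preceding note.

[source,yaml]
----
spec:
  broker:
    enabled: true
    defaultConfig:
      # Comma-separated bootstrap servers from your Red Hat AMQ Streams cluster (hypothetical address)
      bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092
      # Number of partitions for the Kafka topics that back Broker objects (default 10)
      numPartitions: 10
      # Replication factor; must not exceed the number of Kafka nodes (3 assumed here)
      replicationFactor: 3
----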

.Verification

. Click on the *knative-kafka* resource in the *Knative Kafka* tab. You are automatically directed to the *Knative Kafka Overview* page.

. View the list of *Conditions* for the resource and confirm that they have a status of *True*.
+
image::knative-kafka-overview.png[Kafka Knative Overview page showing Conditions]
+
If the conditions have a status of *Unknown* or *False*, wait a few moments and then refresh the page.

. Check that the Knative Kafka resources have been created:
+
[source,terminal]