* jc-1663 update connectors prereq
* jc-1663 edits for peer review
* jc-1663 more edits for peer review
* jc-1663 again more edits for peer review
* jc-1663 update attributes and other fixes
* jc-1663 fixing use of attributes
* jc-1663 fixing extra spaces
@@ -109,52 +113,61 @@ A *sink* connector allows you to send data from {product-kafka} to an external s
====
endif::[]

== Verifying the prerequisites for using {product-long-connectors}

[role="_abstract"]
Before you use {product-connectors}, you must complete the following prerequisites:

* Determine which {openshift} environment to use for deploying your {product-connectors} instances.
* Configure {product-long-kafka} for use with {product-connectors}.

*Determining which {openshift} environment to use for deploying your {connectors} instances*

For Service Preview, you have two choices:

* *The hosted evaluation environment*
** The {connectors} instances are hosted on a multitenant {openshift-dedicated} cluster that is owned by Red Hat.
** You can create four {connectors} instances at a time.
** The evaluation environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview evaluation guidelines^].

* *Your own trial environment*
** You have access to your own {openshift-dedicated} trial environment.
** You can create an unlimited number of {connectors} instances.
** Your {openshift-dedicated} trial cluster expires after 60 days.
** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift-dedicated} trial cluster^].

*Configuring {product-long-kafka} for use with {product-connectors}*

ifndef::qs[]
Complete the steps in _{base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]_ to set up the following components:
endif::[]

ifdef::qs[]
Complete the steps in the link:https://console.redhat.com/application-services/learning-resources?quickstart=getting-started[Getting started with {product-long-kafka}] quick start to set up the following components:
endif::[]

* A *Kafka instance* that you can use for {product-connectors}.
* A *Kafka topic* to store messages sent by data sources and make the messages available to data sinks.
* A *service account* that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
* *Access rules* for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.

ifdef::qs[]
.Procedure
Make sure that you have set up the prerequisite components.

.Verification
* Is the Kafka instance listed in the Kafka instances table and is the Kafka instance in the *Ready* state?
* Is your service account created in the *Service Accounts* page?
* Did you save your service account credentials to a secure location?
* Are the permissions for your service account listed in the *Access* page of the Kafka instance?
* Is the Kafka topic that you created for {connectors} listed in the topics table of the Kafka instance?
* If you plan to use a 60-day {openshift-dedicated} trial cluster to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your trial cluster?

endif::[]
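The service account and access rules described above map onto standard Kafka client authentication settings. The sketch below shows, in rough outline, how a client configuration might be assembled from a service-account credential pair; the bootstrap server host, client ID, and client secret are placeholders, and SASL/PLAIN over TLS is an assumption — check the connection settings shown for your Kafka instance.

```python
# Sketch: how a service-account credential pair maps onto Kafka client
# settings. Every value below is a placeholder, not a real endpoint.
def connection_config(bootstrap_server, client_id, client_secret):
    """Build a kafka-python-style configuration dictionary."""
    return {
        "bootstrap_servers": bootstrap_server,
        "security_protocol": "SASL_SSL",    # managed Kafka uses TLS
        "sasl_mechanism": "PLAIN",          # assumption: PLAIN is enabled
        "sasl_plain_username": client_id,   # service account client ID
        "sasl_plain_password": client_secret,
    }

config = connection_config(
    "my-instance.kafka.example.com:443",    # placeholder host:port
    "srvc-acct-example-id",                 # placeholder client ID
    "example-secret",                       # placeholder client secret
)
```

The access rules you grant to the service account then determine whether a client using this configuration may produce to or consume from a given topic.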
@@ -165,6 +178,7 @@ ifndef::qs[]
* Verify that you saved your service account credentials to a secure location.
* Verify that the permissions for your service account are listed in the *Access* page of the Kafka instance.
* Verify that the Kafka topic that you created for {product-connectors} is listed in the Kafka instance's topics table.
* If you plan to use a 60-day {openshift-dedicated} trial cluster to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your trial cluster.

endif::[]
@@ -187,7 +201,7 @@ ifndef::qs[]
endif::[]

.Procedure
. In the {product-long-connectors} web console, select *{connectors}* and then click *Create {connectors} instance*.
. Select the connector that you want to use for connecting to a data source.
+
You can browse through the catalog of available connectors. You can also search for a particular connector by name, and filter for sink or source connectors.
@@ -198,11 +212,11 @@ Click the card to select the connector, and then click *Next*.

. For *Kafka instance*, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.

. On the *Namespace* page, the namespace that you select depends on your {openshift-dedicated} environment. The namespace is the deployment space that hosts your {connectors} instances.
+
If you're using a trial cluster in your own {openshift-dedicated} environment, select the card for the namespace that was created when a system administrator added the {connectors} service to your trial cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift-dedicated} trial cluster^].
+
If you're using the evaluation {openshift-dedicated} environment, click *Register eval namespace* to provision a namespace for hosting the {connectors} instances that you create.

. Click *Next*.
@@ -239,10 +253,10 @@ ifdef::qs[]
* Does your source {connectors} instance generate messages?
endif::[]
ifndef::qs[]
* Verify that your source {connectors} instance generates messages.
endif::[]

.. In the {product-long-rhoas} web console, select *Streams for Apache Kafka* > *Kafka Instances*.
.. Click the Kafka instance that you created for connectors.
.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance.
.. Click the *Messages* tab to see a list of `Hello World!` messages.
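Conceptually, the data source connector in this example does little more than emit the same greeting onto the configured topic at an interval. This toy sketch models that behavior; the topic name and message are the example values used in this procedure, and the function is purely illustrative — the real connector runs as a managed service.

```python
# Toy model of the "Hello World!" source connector: it produces a batch
# of identical records destined for one Kafka topic.
def generate_records(topic, message, count):
    """Return (topic, value) pairs like those a source connector emits."""
    return [(topic, message) for _ in range(count)]

# "test" is the example topic name used elsewhere in this procedure.
records = generate_records("test", "Hello World!", 3)
# Each record targets the same topic with the same payload, which is
# why the Messages tab shows a growing list of "Hello World!" entries.
```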
@@ -275,11 +289,11 @@ endif::[]
+
For example, select *test* and then click *Next*.

. On the *Namespace* page, the namespace that you select depends on your {openshift-dedicated} environment. The namespace is the deployment space that hosts your {connectors} instances.
+
If you're using a trial cluster on your own {openshift-dedicated} environment, select the card for the namespace that was created when you added the {connectors} service to your trial cluster.
+
If you're using the evaluation {openshift-dedicated} environment, click the *eval namespace* that you created when you created the source connector.

. Click *Next*.
@@ -298,7 +312,7 @@ If you're using the evaluation OpenShift Dedicated environment, click the *eval

. Review the summary of the configuration properties and then click *Create {connectors} instance*.
+
Your {connectors} instance is listed in the table of {connectors}.
+
After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state. It consumes messages from the associated Kafka topic and sends them to the data sink (for this example, the data sink is the HTTP URL that you provided).
@@ -309,7 +323,7 @@ ifdef::qs[]
endif::[]

ifndef::qs[]
* Verify that you see HTTP POST calls with `"Hello World!!"` messages. Open a web browser tab to your custom URL for the link:https://webhook.site[webhook.site^].
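For local experimentation, a minimal stand-in for the webhook.site endpoint can be sketched as below: it accepts the HTTP POST calls that the sink connector sends and records their bodies. The port and the idea of substituting your own receiver are assumptions for illustration only; the managed sink connector must be able to reach whatever URL you configure, so a receiver on your laptop generally won't be reachable from it.

```python
# Minimal HTTP receiver that records POST bodies, standing in for the
# webhook.site page used in this procedure (illustrative only).
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # bodies of POST requests, e.g. "Hello World!!"

class SinkHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        received.append(body)
        self.send_response(200)   # acknowledge the delivery
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging

# To run: HTTPServer(("", 8080), SinkHandler).serve_forever()
```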
Changed file: docs/connectors/getting-started-connectors/quickstart.yml (2 additions, 1 deletion)
@@ -16,9 +16,10 @@ spec:
  description: !snippet README.adoc#description
  prerequisites:
    - Complete the <a href="https://console.redhat.com/application-services/learning-resources?quickstart=getting-started">Getting started with OpenShift Streams for Apache Kafka</a> quick start.
    - If you plan to use a 60-day OpenShift Dedicated trial cluster to deploy your Connectors instances, a cluster administrator must install the Connectors add-on as described in <a href="https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01">Adding the Red Hat OpenShift Connectors add-on to your OpenShift Dedicated trial cluster</a>.