@@ -95,6 +98,12 @@ In this example, you connect a data source (a data generator) that creates Kafka
 
 // Condition out QS-only content so that it doesn't appear in docs.
 // All QS anchor IDs must be in this alternate anchor ID format `[#anchor-id]` because the ascii splitter relies on the other format `[id="anchor-id"]` to generate module files.
+
+ifndef::qs[]
+.Example flow of messages from a data source to a data sink
+image::{imagesdir}/connectors-getting-started-connectors/connectors-example-diagram.png[Image of data flowing from a data source to a data sink]
+endif::[]
+
 ifdef::qs[]
 [#description]
 ====
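
The two anchor formats that the comment above contrasts look like this in AsciiDoc source. A minimal illustration: the section title is invented for the example, and `description` matches the QS anchor used in this hunk.

[source,asciidoc]
----
// Alternate anchor ID format, required for QS content:
[#description]
== Example section

// Standard anchor ID format, which the ascii splitter uses to generate module files:
[id="description"]
== Example section
----
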
@@ -121,26 +130,32 @@ endif::[]
 
 Before you use {product-connectors}, you must complete the following prerequisites:
 
-* Determine which {openshift} environment to use for deploying your {product-connectors} instances.
+* Determine which {openshift} environment to use for your _{connectors} namespace_. The {connectors} namespace is where your {product-connectors} instances are deployed.
 
 * Configure {product-long-kafka} for use with {product-connectors}.
 
-*Determining which {openshift} environment to use for deploying your {connectors} instances*
+*Determining which {openshift} environment to use for your {connectors} namespace*
 
-For Service Preview, you have two choices:
+You have three choices:
 
 * *The hosted preview environment*
 
 ** The {connectors} instances are hosted on a multitenant {osd-name-short} cluster that is owned by Red Hat.
 ** You can create four {connectors} instances at a time.
-** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview guidelines^].
+** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Preview guidelines^].
 
-* *Your own trial environment*
+* *Your own {osd-name} Trial environment*
 
-** You have access to your own {osd-name-short} trial environment.
+** You have access to your own {osd-name} Trial environment.
 ** You can create an unlimited number of {connectors} instances.
-** Your {osd-name-short} trial cluster expires after 60 days.
-** A cluster administrator has installed the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+** Your {osd-name-short} Trial cluster expires after 60 days.
+** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
+
+* *Your own {rosa-name} cluster*
+
+** You have access to your own {rosa-name} (ROSA) environment.
+** You can create {connectors} instances depending on your subscription, as described in https://access.redhat.com/articles/6990631[Red Hat OpenShift Connectors Tiers^].
+** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
 
 *Configuring {product-long-kafka} for use with {product-connectors}*
 
@@ -152,8 +167,8 @@ ifdef::qs[]
 Complete the steps in the link:https://console.redhat.com/application-services/learning-resources?quickstart=getting-started[Getting started with {product-long-kafka}] quick start to set up the following components:
 endif::[]
 
-* A _Kafka instance_ that you can use for {product-connectors}.
-* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks.
+* A _Kafka instance_ that you can use for {product-connectors}. For this example, the Kafka instance is `test-connect`.
+* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks. For this example, the Kafka topic is `test-topic`.
 * A _service account_ that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
 * _Access rules_ for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.
 
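
For readers who prefer a terminal to the web console, the same prerequisite components can be sketched with the `rhoas` CLI. This is a hedged sketch, not part of the official procedure: flags can differ between CLI versions, `test-connect` and `test-topic` are the example names from this guide, and `<client-id>` is a placeholder for the service account that you create.

[source,bash]
----
rhoas login                                  # authenticate against console.redhat.com
rhoas kafka create --name test-connect       # the example Kafka instance
rhoas kafka topic create --name test-topic   # the example Kafka topic
rhoas service-account create \
  --short-description connectors-example \
  --file-format env                          # save the credentials to a secure location
rhoas kafka acl grant-access --producer --consumer \
  --service-account <client-id> \
  --topic test-topic --group all             # access rules for the service account
----
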
@@ -167,7 +182,7 @@ Make sure that you have set up the prerequisite components.
 * Did you save your service account credentials to a secure location?
 * Are the permissions for your service account listed on the *Access* page of the Kafka instance?
 * Is the Kafka topic that you created for {connectors} listed on the *Topics* page of the Kafka instance?
-* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your trial cluster?
+* If you plan to use your own {openshift} cluster ({osd-name-short} Trial or ROSA) to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your cluster?
 
 endif::[]
 
@@ -178,7 +193,7 @@ ifndef::qs[]
 * Verify that you saved your service account credentials to a secure location.
 * Verify that the permissions for your service account are listed on the *Access* page of the Kafka instance.
 * Verify that the Kafka topic that you created for {product-connectors} is listed on the *Topics* page of the Kafka instance.
-* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your trial cluster.
+* If you plan to use your own {openshift} cluster ({osd-name-short} Trial or ROSA) to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your cluster.
 
 endif::[]
 
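
The same verification can be approximated from the `rhoas` CLI; a sketch, assuming you are logged in to the account that owns the Kafka instance:

[source,bash]
----
rhoas service-account list   # the service account exists
rhoas kafka topic list       # test-topic appears in the topic list
rhoas kafka acl list         # the access rules for the service account are in place
----
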
@@ -193,8 +208,11 @@ You configure your {connectors} instance to listen for events from the data sour
 
 For this example, you create an instance of the Data Generator source connector. The Data Generator is provided for development and testing purposes. You specify the text for a message and how often to send the message.
 
-ifndef::qs[]
 .Prerequisites
+
+* If you want to use a dead letter queue (DLQ) to handle any messaging errors, create a Kafka topic for the DLQ.
+
+ifndef::qs[]
 * You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
 endif::[]
 
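
The DLQ topic named in that prerequisite is an ordinary Kafka topic. For example, with the `rhoas` CLI (the name `dlq-topic` is illustrative):

[source,bash]
----
rhoas kafka topic create --name dlq-topic
----
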
@@ -208,15 +226,17 @@ For example, to find the Data Generator source connector, type `data` in the sea
 +
 Click the card to select the connector, and then click *Next*.
 
-. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.
+. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}. For example, click the *test-connect* card.
++
+Click *Next*.
 
-. On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
+. On the *Deployment* page, the namespace that you select depends on your {openshift} environment.
 +
-If you're using a trial cluster in your own {osd-name-short} environment, select the card for the namespace that was created when a system administrator added the {connectors} service to your trial cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+If you're using your own {openshift} environment, select the card for the namespace that was created when a cluster administrator added the {connectors} service to your cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
 +
 If you're using the hosted preview environment, click *Create preview namespace* to provision a namespace for hosting the {connectors} instances that you create.
-
-. Click *Next*.
++
+Click *Next*.
 
 . Specify the core configuration for your {connectors} instance:
 .. Type a name for your {connectors} instance. For example, type `hello world generator`.
@@ -227,16 +247,16 @@ If you're using the hosted preview environment, click *Create preview namespace*
 .. *Message*: Type the content of the message that you want the {connectors} instance to send to the Kafka topic. For example, type `Hello World!!`.
 .. *Period*: Specify the interval (in milliseconds) at which you want the {connectors} instance to send messages to the Kafka topic. For example, to send a message every 10 seconds, specify `10000`.
 .. *Data Shape Produces Format*: Accept the default, `application/octet-stream`.
++
+Click *Next*.
 
-. Click *Next*.
-
-. Select one of the following error handling policy for your {connectors} instance:
+. Select one of the following error handling policies for your {connectors} instance:
 +
-* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to *Failed* state. You can view the error message.
+* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to the *Failed* state. You can view the error message.
 * *Ignore*: If a message fails to send, the {connectors} instance ignores the error and continues to run. No error message is logged.
-* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you specify.
-
-. Click *Next*.
+* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you created for the DLQ.
++
+Click *Next*.
 
 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
 +
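
Behind the form, these choices amount to a small connector configuration. The JSON below is only a sketch of what such a payload might look like: the field names and the `data_generator_0.1` type ID are assumptions modeled on the public Connectors API, not values taken from this document; only the message, period, format, and DLQ topic come from the example above.

[source,json]
----
{
  "name": "hello world generator",
  "connector_type_id": "data_generator_0.1",
  "connector": {
    "data_shape": { "produces": { "format": "application/octet-stream" } },
    "message": "Hello World!!",
    "period": 10000,
    "error_handler": { "dead_letter_queue": { "topic": "dlq-topic" } }
  }
}
----
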
@@ -253,8 +273,8 @@ ifndef::qs[]
 endif::[]
 
 .. In the {product-long-rhoas} web console, select *Streams for Apache Kafka* > *Kafka Instances*.
-.. Click the Kafka instance that you created for connectors.
-.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance.
+.. Click the Kafka instance that you created for connectors. For example, click *test-connect*.
+.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance. For example, click *test-topic*.
 .. Click the *Messages* tab to see a list of `Hello World!!` messages.
 
 
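
You can also read the messages outside the web console with a standard Kafka client such as `kcat`; a sketch, where the bootstrap server and the service-account credentials are placeholders for your own values:

[source,bash]
----
kcat -C -t test-topic \
  -b <bootstrap-server> \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username='<client-id>' \
  -X sasl.password='<client-secret>'
----
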
@@ -273,7 +293,7 @@ ifndef::qs[]
 endif::[]
 
 * You created a Data Generator source {connectors} instance.
 * For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
-
+* If you want to use a dead letter queue (DLQ) to handle any messaging errors, create a Kafka topic for the DLQ.
 
 .Procedure
 
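
To preview what the HTTP sink will send, you can POST a test message to your unique `webhook.site` URL yourself; `<your-unique-id>` is a placeholder for the identifier that the site assigns you:

[source,bash]
----
curl -X POST \
  -H 'Content-Type: application/octet-stream' \
  -d 'Hello World!!' \
  https://webhook.site/<your-unique-id>
----
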
@@ -283,17 +303,17 @@ endif::[]
 .. For example, type `http` in the search field. The list of {connectors} is filtered to show the *HTTP sink* connector.
 .. Click the *HTTP sink* card and then click *Next*.
 
-. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with.
+. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with. For example, select *test-connect*.
 +
-For example, select *test* and then click *Next*.
+Click *Next*.
 
-. On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
+. On the *Deployment* page, the namespace that you select depends on your {openshift} environment.
 +
-If you're using a trial cluster on your own {osd-name-short} environment, select the card for the namespace that was created when you added the {connectors} service to your trial cluster.
+If you're using your own {openshift} environment, select the card for the namespace that was created when a cluster administrator added the {connectors} service to your cluster.
 +
 If you're using the hosted preview environment, click the *preview namespace* that you provisioned when you created the source connector.
-
-. Click *Next*.
++
+Click *Next*.
 
 . Provide the core configuration for your connector:
 .. Type a unique name for the connector. For example, type `hello world receiver`.
@@ -304,10 +324,12 @@ If you're using the hosted preview environment, click the *preview namespace* th
 .. *Method*: Accept the default, `POST`.
 .. *URL*: Type your unique URL from the link:https://webhook.site[webhook.site^].
 .. *Data Shape Consumes Format*: Accept the default, `application/octet-stream`.
++
+Click *Next*.
 
-. Click *Next*.
-
-. Select an error handling policy for your {connectors} instance. For example, select *Stop* and then click *Next*.
+. Select an error handling policy for your {connectors} instance. For example, select *Stop*.
++
+Click *Next*.
 
 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
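
As with the source connector, the sink configuration reduces to a small JSON document. Again a sketch only, with the field names and the `http_sink_0.1` type ID assumed rather than taken from this document; the method, URL format, and *Stop* policy mirror the example above.

[source,json]
----
{
  "name": "hello world receiver",
  "connector_type_id": "http_sink_0.1",
  "connector": {
    "data_shape": { "consumes": { "format": "application/octet-stream" } },
    "method": "POST",
    "url": "https://webhook.site/<your-unique-id>",
    "error_handler": { "stop": {} }
  }
}
----
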
docs/connectors/getting-started-connectors/quickstart.yml: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ spec:
   description: !snippet README.adoc#description
   prerequisites:
   - Complete the <a href="https://console.redhat.com/application-services/learning-resources?quickstart=getting-started">Getting started with OpenShift Streams for Apache Kafka</a> quick start.
-  - If you plan to use a 60-day OpenShift Dedicated trial cluster to deploy your Connectors instances, a cluster administrator must install the Connectors add-on as described in <a href="https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01">Adding the Red Hat OpenShift Connectors add-on to your OpenShift Dedicated trial cluster</a>.
+  - If you plan to use your own OpenShift cluster to deploy your Connectors instances, a cluster administrator must install the Connectors add-on as described in <a href="https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01">Adding the Red Hat OpenShift Connectors add-on to your OpenShift cluster</a>.
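
For context, the `!snippet` tag shown in this hunk pulls AsciiDoc content by anchor ID, which is why the README.adoc hunk above keeps the `[#description]` anchor. A schematic view of the surrounding file; every field except `description` and `prerequisites` is an assumption for illustration:

[source,yaml]
----
spec:
  displayName: Getting started with OpenShift Connectors   # assumed field
  description: !snippet README.adoc#description   # resolves the [#description] anchor in README.adoc
  prerequisites:
  - Complete the Getting started with OpenShift Streams for Apache Kafka quick start.
----
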