docs/connectors/getting-started-connectors/README.adoc
In this quick start, you learn how to create a source connector and sink connector and send data to and from {product-kafka}.

A _source_ connector allows you to send data from an external system to {product-kafka}.

A _sink_ connector allows you to send data from {product-kafka} to an external system.
====
endif::[]
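To make the two directions concrete, the following minimal sketch shows the same flows written by hand against the Kafka clients that a connector replaces. This is an illustration only: the broker address, topic name, and events are hypothetical placeholders, not values from this quick start.

[source,python]
----
# Illustration of the two connector directions with the
# confluent-kafka Python client. All names are placeholders.
from confluent_kafka import Producer, Consumer

BOOTSTRAP = "<bootstrap-server>"  # placeholder

# Source direction: read from an external system, produce to Kafka.
producer = Producer({"bootstrap.servers": BOOTSTRAP})
for event in ["event-1", "event-2"]:  # stand-in for an external data feed
    producer.produce("test-topic", value=event.encode("utf-8"))
producer.flush()

# Sink direction: consume from Kafka, hand off to an external system.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "sink-example",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["test-topic"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print("would send to the external system:", msg.value().decode("utf-8"))
consumer.close()
----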
For Service Preview, you have two choices:

* *The hosted preview environment*
** The {connectors} instances are hosted on a multitenant {osd-name-short} cluster that is owned by Red Hat.
** You can create four {connectors} instances at a time.
** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview guidelines^].

* *Your own trial environment*
** You have access to your own {osd-name-short} trial environment.
** You can create an unlimited number of {connectors} instances.
** Your {osd-name-short} trial cluster expires after 60 days.
** A cluster administrator has installed the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].

*Configuring {product-long-kafka} for use with {product-connectors}*

ifndef::qs[]
Complete the steps in {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^] to set up the following components:
endif::[]

ifdef::qs[]
Complete the steps in the link:https://console.redhat.com/application-services/learning-resources?quickstart=getting-started[Getting started with {product-long-kafka}] quick start to set up the following components:
endif::[]

* A _Kafka instance_ that you can use for {product-connectors}.
* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks.
* A _service account_ that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
* _Access rules_ for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.
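The service account credentials are what any client, including a {connectors} instance, presents when it connects to the Kafka instance. As a hedged sketch only, authenticating from a Python client might look like the following; the SASL/PLAIN mechanism, bootstrap server, and credential values are assumptions to check against your Kafka instance's connection details.

[source,python]
----
# Sketch: connecting to the Kafka instance with service account
# credentials. All connection values are placeholders.
from confluent_kafka import Producer

def on_delivery(err, msg):
    # Authentication or access-rule problems surface here as delivery errors.
    if err:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic())

producer = Producer({
    "bootstrap.servers": "<bootstrap-server>",   # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<client-id>",      # service account Client ID
    "sasl.password": "<client-secret>",  # service account Client Secret
})
producer.produce("test-topic", value=b"connectivity check", callback=on_delivery)
producer.flush()
----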
ifdef::qs[]
.Procedure
Make sure that you have set up the prerequisite components.

.Verification
* Is the Kafka instance listed on the *Kafka Instances* page and is the Kafka instance in the *Ready* state?
* Is your service account created on the *Service Accounts* page?
* Did you save your service account credentials to a secure location?
* Are the permissions for your service account listed on the *Access* page of the Kafka instance?
* Is the Kafka topic that you created for {connectors} listed on the *Topics* page of the Kafka instance?
* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your trial cluster?
endif::[]

ifndef::qs[]
.Verification
* Verify that the Kafka instance is listed on the *Kafka Instances* page and that the state of the Kafka instance is shown as *Ready*.
* Verify that your service account was successfully created on the *Service Accounts* page.
* Verify that you saved your service account credentials to a secure location.
* Verify that the permissions for your service account are listed on the *Access* page of the Kafka instance.
* Verify that the Kafka topic that you created for {product-connectors} is listed on the *Topics* page of the Kafka instance.
* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your trial cluster.
endif::[]
== Creating a {connectors} instance for a data source

[role="_abstract"]
A _source_ connector consumes events from an external data source and produces Kafka messages.

You configure your {connectors} instance to listen for events from the data source and produce a Kafka message for each event. Your {connectors} instance sends the messages at regular intervals to the Kafka topic that you created for {connectors}.

For this example, you create an instance of the Data Generator source connector. The Data Generator is provided for development and testing purposes. You specify the text for a message and how often to send the message.

ifndef::qs[]
.Prerequisites
* You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
endif::[]

.Procedure
. In the {product-long-connectors} web console, click *Create a {connectors} instance*.

. Select the connector that you want to use for connecting to a data source.
+
You can browse through the catalog of available connectors. You can also search for a particular connector by name, and filter for sink or source connectors.
+
For example, to find the Data Generator source connector, type `data` in the search box. The list is filtered to show only the *Data Generator source* card.
+
Click the card to select the connector, and then click *Next*.

. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.

. On the *Namespace* page, select a namespace. The namespace is the deployment space that hosts your {connectors} instances, so which namespace you select depends on your {osd-name-short} environment.
+
If you're using a trial cluster in your own {osd-name-short} environment, select the card for the namespace that was created when a system administrator added the {connectors} service to your trial cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+
If you're using the hosted preview environment, click *Create preview namespace* to provision a namespace for hosting the {connectors} instances that you create.

. Click *Next*.

. Specify the core configuration for your {connectors} instance:
.. Type a name for your {connectors} instance. For example, type `hello world generator`.
.. In the *Client ID* and *Client Secret* fields, type the credentials for the service account that you created for {connectors} and then click *Next*.

. Provide connector-specific configuration. For the Data Generator, provide the following information:
.. *Topic Name*: Type the name of the Kafka topic that you created for {connectors}. For example, type `test-topic`.
.. *Content Type*: Accept the default, `text/plain`.
.. *Message*: Type the content of the message that you want the {connectors} instance to send to the Kafka topic. For example, type `Hello World!!`.
.. *Period*: Specify the interval (in milliseconds) at which you want the {connectors} instance to send messages to the Kafka topic. For example, to send a message every 10 seconds, specify `10000`.
.. *Data Shape Produces Format*: Accept the default, `application/octet-stream`.
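+
As a rough illustration only, this configuration amounts to the following loop. The sketch is not the connector's actual implementation, and the bootstrap server value is a placeholder.
+
[source,python]
----
# Sketch of what the Data Generator settings describe: produce the
# configured message to the configured topic once per period.
import time
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "<bootstrap-server>"})  # placeholder

MESSAGE = "Hello World!!"  # the Message field
TOPIC = "test-topic"       # the Topic Name field
PERIOD_MS = 10000          # the Period field

while True:
    producer.produce(TOPIC, value=MESSAGE.encode("utf-8"))
    producer.flush()
    time.sleep(PERIOD_MS / 1000.0)
----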
. Click *Next*.

. Select one of the following error handling policies for your {connectors} instance:
+
* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to the *Failed* state. You can view the error message.
* *Ignore*: If a message fails to send, the {connectors} instance ignores the error and continues to run. No error message is logged.
* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you specify.
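+
If you choose the dead letter queue policy, you can inspect failed messages by consuming the dead letter topic. The following is a minimal sketch: the topic name and connection values are placeholders, and the exact layout of the error details depends on the connector.
+
[source,python]
----
# Sketch: inspecting a dead letter queue topic for failed messages.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<bootstrap-server>",  # placeholder
    "group.id": "dlq-inspector",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-dlq-topic"])  # the dead letter topic you configured
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        # Headers and value carry the error details for the failed message.
        print(msg.headers(), msg.value())
finally:
    consumer.close()
----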
. Click *Next*.
. Review the summary of the configuration properties and then click *Create {connectors} instance*.
+
Your {connectors} instance is listed on the *{connectors} Instances* page. After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state and it starts producing messages and sending them to its associated Kafka topic.
+
From the *{connectors} Instances* page, you can stop, start, duplicate, and delete your {connectors} instance, as well as edit its configuration, by clicking the options icon (three vertical dots).

.Verification
ifdef::qs[]
.. In the {product-long-rhoas} web console, select *Streams for Apache Kafka* > *Kafka Instances*.
.. Click the Kafka instance that you created for connectors.
.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance.
.. Click the *Messages* tab to see a list of `Hello World!!` messages.
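If you prefer to check from a terminal instead of the web console, a small consumer can print the generated messages. This is a hedged sketch: the bootstrap server and credentials are placeholders, and SASL/PLAIN is an assumption to verify against your Kafka instance's connection settings.

[source,python]
----
# Sketch: confirm that the generated messages arrive on the topic.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<bootstrap-server>",   # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<client-id>",      # service account Client ID
    "sasl.password": "<client-secret>",  # service account Client Secret
    "group.id": "verify-source-connector",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["test-topic"])
for _ in range(5):
    msg = consumer.poll(timeout=15.0)
    if msg is not None and msg.error() is None:
        print(msg.value().decode("utf-8"))  # expect: Hello World!!
consumer.close()
----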
[id="proc-creating-sink-connector_{context}"]
266
262
== Creating a {connectors} instance for a data sink
267
263
268
264
[role="_abstract"]
269
-
A *sink* connector consumes messages from a Kafka topic and sends them to an external system.
265
+
A _sink_ connector consumes messages from a Kafka topic and sends them to an external system.
270
266
271
-
For this example, you use the *HTTP Sink* connector which consumes the Kafka messages (produced by the source {connectors} instance) and sends the messages to an HTTP endpoint.
267
+
For this example, you use the *HTTP Sink* connector which consumes the Kafka messages (produced by your Data Generator source {connectors} instance) and sends the messages to an HTTP endpoint.
272
268
273
-
ifndef::qs[]
274
269
.Prerequisites
270
+
271
+
ifndef::qs[]
275
272
* You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
276
-
* You created the source {connectors} instance as described in _Creating a {connectors} instance for a data source_.
277
-
* For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
278
273
endif::[]
274
+
* You created a Data Generator source {connectors} instance.
275
+
* For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
276
+
279
277
280
278
.Procedure
281
279
. In the {product-long-connectors} web console, click *Create a {connectors} instance*.

. Select the sink connector that you want to use:
.. For example, type `http` in the search field. The list of {connectors} is filtered to show the *HTTP sink* connector.
.. Click the *HTTP sink* card and then click *Next*.

. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with.
+
For example, select *test* and then click *Next*.

. On the *Namespace* page, select a namespace. As before, the namespace is the deployment space that hosts your {connectors} instances, so which namespace you select depends on your {osd-name-short} environment.
+
If you're using a trial cluster in your own {osd-name-short} environment, select the card for the namespace that was created when you added the {connectors} service to your trial cluster.
+
If you're using the hosted preview environment, click the *preview namespace* that you provisioned when you created the source connector.

. Click *Next*.

. Provide the core configuration for your connector:
.. Type a unique name for the connector. For example, type `hello world receiver`.
.. In the *Client ID* and *Client Secret* fields, type the credentials for the service account that you created for {connectors} and then click *Next*.

. Provide the connector-specific configuration for your HTTP sink {connectors} instance:
.. *Topic Names*: Type the name of the topic that you used for the source {connectors} instance. For example, type `test-topic`.
.. *Method*: Accept the default, `POST`.
.. *URL*: Type your unique URL from the link:https://webhook.site[webhook.site^] page.
.. *Data Shape Consumes Format*: Accept the default, `application/octet-stream`.
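+
As a rough illustration only, this sink configuration amounts to the following loop. The sketch is not the connector's actual implementation; the webhook URL and connection values are placeholders.
+
[source,python]
----
# Sketch of what the HTTP sink does: consume each Kafka message
# and POST its body to the configured URL.
import requests
from confluent_kafka import Consumer

WEBHOOK_URL = "https://webhook.site/<your-unique-id>"  # placeholder

consumer = Consumer({
    "bootstrap.servers": "<bootstrap-server>",  # placeholder
    "group.id": "http-sink-example",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["test-topic"])
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        requests.post(WEBHOOK_URL, data=msg.value())  # Method: POST
finally:
    consumer.close()
----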
. Click *Next*.
. Select an error handling policy for your {connectors} instance. For example, select *Stop* and then click *Next*.

. Review the summary of the configuration properties and then click *Create {connectors} instance*.
+
Your {connectors} instance is added to the *{connectors} Instances* page.
+
After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state. It consumes messages from the associated Kafka topic and sends them to the data sink (for this example, the data sink is the HTTP URL that you provided).