Commit 97b6bed

jc-1721-a connectors gs cleanup (#616)
* jc-1721-a connectors gs cleanup
* jc-1731-a peer review
* jc-1731-a peer review 2
* jc-1731-a peer review 3
1 parent 9158a57 commit 97b6bed

1 file changed: 62 additions, 63 deletions

docs/connectors/getting-started-connectors/README.adoc

@@ -107,9 +107,9 @@ Welcome to the quick start for {product-long-connectors}.

 In this quick start, you learn how to create a source connector and sink connector and send data to and from {product-kafka}.

-A *source* connector allows you to send data from an external system to {product-kafka}.
+A _source_ connector allows you to send data from an external system to {product-kafka}.

-A *sink* connector allows you to send data from {product-kafka} to an external system.
+A _sink_ connector allows you to send data from {product-kafka} to an external system.
 ====
 endif::[]
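To make the two directions concrete, here is a minimal `kafka-python` sketch of the roles the two connector types play; it is an illustration only, and the broker address and topic name are placeholders:

[source,python]
----
# Illustration: a source connector plays the producer role (external system
# -> Kafka); a sink connector plays the consumer role (Kafka -> external system).
# Assumes a reachable broker and the kafka-python package.
from kafka import KafkaProducer, KafkaConsumer

# Source direction: publish data from an external system to a Kafka topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")  # placeholder
producer.send("test-topic", b"event from an external system")
producer.flush()

# Sink direction: read from the Kafka topic and hand off to an external system.
consumer = KafkaConsumer(
    "test-topic",
    bootstrap_servers="localhost:9092",  # placeholder
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,            # stop when idle, for this demo
)
for record in consumer:
    print("would deliver to external system:", record.value)
----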

@@ -129,55 +129,55 @@ Before you use {product-connectors}, you must complete the following prerequisit

 For Service Preview, you have two choices:

-* *The hosted evaluation environment*
+* *The hosted preview environment*

 ** The {connectors} instances are hosted on a multitenant {osd-name-short} cluster that is owned by Red Hat.
 ** You can create four {connectors} instances at a time.
-** The evaluation environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview evaluation guidelines^].
+** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview guidelines^].

 * *Your own trial environment*

 ** You have access to your own {osd-name-short} trial environment.
 ** You can create an unlimited number of {connectors} instances.
 ** Your {osd-name-short} trial cluster expires after 60 days.
-** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+** A cluster administrator has installed the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].

 *Configuring {product-long-kafka} for use with {product-connectors}*

 ifndef::qs[]
-Complete the steps in _{base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]_ to set up the following components:
+Complete the steps in {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^] to set up the following components:
 endif::[]

 ifdef::qs[]
 Complete the steps in the link:https://console.redhat.com/application-services/learning-resources?quickstart=getting-started[Getting started with {product-long-kafka}] quick start to set up the following components:
 endif::[]

-* A *Kafka instance* that you can use for {product-connectors}.
-* A *Kafka topic* to store messages sent by data sources and make the messages available to data sinks.
-* A *service account* that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
-* *Access rules* for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.
+* A _Kafka instance_ that you can use for {product-connectors}.
+* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks.
+* A _service account_ that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
+* _Access rules_ for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.

 ifdef::qs[]
 .Procedure
 Make sure that you have set up the prerequisite components.

 .Verification
-* Is the Kafka instance listed in the Kafka instances table and is the Kafka instance in the *Ready* state?
-* Is your service account created in the *Service Accounts* page?
+* Is the Kafka instance listed on the *Kafka Instances* page and is the Kafka instance in the *Ready* state?
+* Is your service account created on the *Service Accounts* page?
 * Did you save your service account credentials to a secure location?
-* Are the permissions for your service account listed in the *Access* page of the Kafka instance?
-* Is the Kafka topic that you created for {connectors} listed in the topics table of the Kafka instance?
+* Are the permissions for your service account listed on the *Access* page of the Kafka instance?
+* Is the Kafka topic that you created for {connectors} listed on the *Topics* page of the Kafka instance?
 * If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your trial cluster?

 endif::[]

 ifndef::qs[]
 .Verification
-* Verify that the Kafka instance is listed in the Kafka instances table and that the state of the Kafka instance is shown as *Ready*.
-* Verify that your service account was successfully created in the *Service Accounts* page.
+* Verify that the Kafka instance is listed on the *Kafka Instances* page and that the state of the Kafka instance is shown as *Ready*.
+* Verify that your service account was successfully created on the *Service Accounts* page.
 * Verify that you saved your service account credentials to a secure location.
-* Verify that the permissions for your service account are listed in the *Access* page of the Kafka instance.
-* Verify that the Kafka topic that you created for {product-connectors} is listed in the Kafka instance's topics table.
+* Verify that the permissions for your service account are listed on the *Access* page of the Kafka instance.
+* Verify that the Kafka topic that you created for {product-connectors} is listed on the *Topics* page of the Kafka instance.
 * If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your trial cluster.

 endif::[]
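Before moving on, the service-account credentials and access rules can be sanity-checked outside the web console. The following is a minimal sketch, assuming the `kafka-python` package and that the Kafka instance accepts SASL/PLAIN with the service account's client ID and secret; the bootstrap server and credentials are placeholders from your own instance:

[source,python]
----
# Sketch: authenticate with the service account and list visible topics.
# All angle-bracket values are placeholders from your own Kafka instance.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    bootstrap_servers="<bootstrap-server>:443",  # from the instance's connection details
    security_protocol="SASL_SSL",        # TLS plus SASL authentication
    sasl_mechanism="PLAIN",              # assumes SASL/PLAIN is enabled
    sasl_plain_username="<client-id>",       # service account Client ID
    sasl_plain_password="<client-secret>",   # service account Client Secret
)
print(consumer.topics())  # should include the topic you created, e.g. test-topic
consumer.close()
----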
@@ -187,66 +187,62 @@ endif::[]
 == Creating a {connectors} instance for a data source

 [role="_abstract"]
-A *source* connector consumes events from an external data source and produces Kafka messages.
+A _source_ connector consumes events from an external data source and produces Kafka messages.

-For this example, you create an instance of the *Data Generator* source connector.
+You configure your {connectors} instance to listen for events from the data source and produce a Kafka message for each event. Your {connectors} instance sends the messages at regular intervals to the Kafka topic that you created for {connectors}.

-You configure your connector to listen for events from the data source and produce a Kafka message for each event.
-
-The connector sends the messages at regular intervals to the Kafka topic that you created for your {connectors} instances.
+For this example, you create an instance of the Data Generator source connector. The Data Generator is provided for development and testing purposes. You specify the text for a message and how often to send the message.

 ifndef::qs[]
 .Prerequisites
 * You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
 endif::[]

 .Procedure
-. In the {product-long-connectors} web console, select *{connectors}* and then click *Create {connectors} instance*.
+. In the {product-long-connectors} web console, click *Create a {connectors} instance*.
 . Select the connector that you want to use for connecting to a data source.
 +
 You can browse through the catalog of available connectors. You can also search for a particular connector by name, and filter for sink or source connectors.
 +
-For example, to find the *Data Generator* source connector, type *data* in the search box. The list filters to show only the *Data Generator Connector* card.
+For example, to find the Data Generator source connector, type `data` in the search box. The list is filtered to show only the *Data Generator source* card.
 +
 Click the card to select the connector, and then click *Next*.

-. For *Kafka instance*, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.
+. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.

 . On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
 +
 If you're using a trial cluster in your own {osd-name-short} environment, select the card for the namespace that was created when a system administrator added the {connectors} service to your trial cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
 +
-If you're using the evaluation {osd-name-short} environment, click *Register eval namespace* to provision a namespace for hosting the {connectors} instances that you create.
+If you're using the hosted preview environment, click *Create preview namespace* to provision a namespace for hosting the {connectors} instances that you create.

 . Click *Next*.

-. Configure the core configuration for your {connectors} instance:
-.. Type a name for your {connectors} instance.
-.. Type the *Client ID* and *Client Secret* of the service account that you created for {connectors} and then click *Next*.
-. Provide connector-specific configuration. For the *Data Generator*, provide the following information:
-.. *Data shape Format*: Accept the default, `application/octet-stream`.
-.. *Topic Names*: Type the name of the topic that you created for {connectors}. For example, type *test-topic*.
+. Specify the core configuration for your {connectors} instance:
+.. Type a name for your {connectors} instance. For example, type `hello world generator`.
+.. In the *Client ID* and *Client Secret* fields, type the credentials for the service account that you created for {connectors} and then click *Next*.
+. Provide connector-specific configuration. For the Data Generator, provide the following information:
+.. *Topic Name*: Type the name of the Kafka topic that you created for {connectors}. For example, type `test-topic`.
 .. *Content Type*: Accept the default, `text/plain`.
-.. *Message*: Type the content of the message that you want the {connectors} instance to send to the Kafka topic. For example, type `Hello World!`.
-.. *Period*: Specify the interval (in milliseconds) at which you want the {connectors} instance to send messages to the Kafka topic. For example, specify `10000`, to send a message every 10 seconds.
+.. *Message*: Type the content of the message that you want the {connectors} instance to send to the Kafka topic. For example, type `Hello World!!`.
+.. *Period*: Specify the interval (in milliseconds) at which you want the {connectors} instance to send messages to the Kafka topic. For example, to send a message every 10 seconds, specify `10000`.
+.. *Data Shape Produces Format*: Accept the default, `application/octet-stream`.
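For intuition, the configuration above amounts to the following behavior, shown here as a minimal sketch rather than the connector's actual implementation (connection placeholders as in the earlier sketch):

[source,python]
----
# Sketch: what the Data Generator settings above ask for --
# Message "Hello World!!" sent to test-topic every 10000 ms as text/plain.
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="<bootstrap-server>:443",  # placeholder
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",                      # assumes SASL/PLAIN is enabled
    sasl_plain_username="<client-id>",           # placeholder
    sasl_plain_password="<client-secret>",       # placeholder
)

while True:
    producer.send("test-topic", b"Hello World!!")  # Message (plain text)
    producer.flush()
    time.sleep(10.0)  # Period: 10000 ms between messages
----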

-. Optionally, configure the error handling policy for your {connectors} instance.
-+
-The options are:
-+
-* *stop*: (the default) The {connectors} instance shuts down when it encounters an error.
-* *log*: The {connectors} instance sends errors to its log.
-* *dead letter queue*: The {connectors} instance sends messages that it cannot handle to a dead letter topic that you define for the {connectors} Kafka instance.
+. Click *Next*.
+
+. Select one of the following error handling policies for your {connectors} instance:
 +
-For example, accept the default *stop* option.
+* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to the *Failed* state. You can view the error message.
+* *Ignore*: If a message fails to send, the {connectors} instance ignores the error and continues to run. No error message is logged.
+* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you specify.
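The *Dead letter queue* option follows the general dead-letter-queue messaging pattern. As a hedged illustration of that pattern (not the connector's internal code), a client-side equivalent looks roughly like this; `deliver` and `dlq-topic` are hypothetical stand-ins:

[source,python]
----
# Illustration of the dead-letter-queue pattern: on failure, record the
# message plus error details on a separate Kafka topic instead of stopping
# (Stop) or dropping the error (Ignore).
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")  # placeholder

def deliver(message: bytes) -> None:
    """Hypothetical stand-in for the connector's normal send step."""
    raise RuntimeError("simulated delivery failure")

def send_with_dlq(message: bytes) -> None:
    try:
        deliver(message)
    except Exception as err:
        producer.send(
            "dlq-topic",  # hypothetical dead-letter topic
            message,
            headers=[("error", str(err).encode())],  # error details travel along
        )
        producer.flush()

send_with_dlq(b"Hello World!!")
----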

 . Click *Next*.

 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
 +
-Your {connectors} instance is listed in the table of {connectors}. After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state and it starts producing messages and sending them to its associated Kafka topic.
+Your {connectors} instance is listed on the *{connectors} Instances* page. After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state and it starts producing messages and sending them to its associated Kafka topic.
 +
-From the {connectors} table, you can stop, start, and delete your {connectors} instance, as well as edit its configuration, by clicking the options icon (three vertical dots).
+From the *{connectors} Instances* page, you can stop, start, duplicate, and delete your {connectors} instance, as well as edit its configuration, by clicking the options icon (three vertical dots).

 .Verification
 ifdef::qs[]
@@ -259,60 +255,63 @@ endif::[]
 .. In the {product-long-rhoas} web console, select *Streams for Apache Kafka* > *Kafka Instances*.
 .. Click the Kafka instance that you created for connectors.
 .. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance.
-.. Click the *Messages* tab to see a list of `Hello World!` messages.
+.. Click the *Messages* tab to see a list of `Hello World!!` messages.
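You can also verify from code instead of the console: a short consumer sketch (same assumptions and placeholders as the earlier sketches) reads the generated messages back from the topic:

[source,python]
----
# Sketch: read the Data Generator's output back from test-topic.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "test-topic",
    bootstrap_servers="<bootstrap-server>:443",  # placeholder
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",                      # assumes SASL/PLAIN is enabled
    sasl_plain_username="<client-id>",           # placeholder
    sasl_plain_password="<client-secret>",       # placeholder
    auto_offset_reset="earliest",                # start from the oldest message
    consumer_timeout_ms=15000,                   # give up after 15 s of silence
)
for record in consumer:
    print(record.value.decode())  # expect repeated "Hello World!!"
----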


 [id="proc-creating-sink-connector_{context}"]
 == Creating a {connectors} instance for a data sink

 [role="_abstract"]
-A *sink* connector consumes messages from a Kafka topic and sends them to an external system.
+A _sink_ connector consumes messages from a Kafka topic and sends them to an external system.

-For this example, you use the *HTTP Sink* connector which consumes the Kafka messages (produced by the source {connectors} instance) and sends the messages to an HTTP endpoint.
+For this example, you use the *HTTP Sink* connector, which consumes the Kafka messages (produced by your Data Generator source {connectors} instance) and sends the messages to an HTTP endpoint.

-ifndef::qs[]
 .Prerequisites
+
+ifndef::qs[]
 * You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
-* You created the source {connectors} instance as described in _Creating a {connectors} instance for a data source_.
-* For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
 endif::[]
+* You created a Data Generator source {connectors} instance.
+* For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
+

 .Procedure

 . In the {product-long-connectors} web console, click *Create {connectors} instance*.

 . Select the sink connector that you want to use:
-.. For example, type *http* in the search field. The list of {connectors} filters to show the *HTTP Sink* connector.
-.. Click the *HTTP Sink connector* card and then click *Next*.
+.. For example, type `http` in the search field. The list of {connectors} is filtered to show the *HTTP sink* connector.
+.. Click the *HTTP sink* card and then click *Next*.

-. Select the {product-kafka} instance for the connector to work with.
+. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with.
 +
 For example, select *test* and then click *Next*.

 . On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
 +
 If you're using a trial cluster on your own {osd-name-short} environment, select the card for the namespace that was created when you added the {connectors} service to your trial cluster.
 +
-If you're using the evaluation {osd-name-short} environment, click the *eval namespace* that you created when you created the source connector.
+If you're using the hosted preview environment, click the *preview namespace* that you provisioned when you created the source connector.

 . Click *Next*.

 . Provide the core configuration for your connector:
-.. Type a unique name for the connector.
-.. Type the *Client ID* and *Client Secret* of the service account that you created for {connectors} and then click *Next*.
+.. Type a unique name for the connector. For example, type `hello world receiver`.
+.. In the *Client ID* and *Client Secret* fields, type the credentials for the service account that you created for {connectors} and then click *Next*.

-. Provide the connector-specific configuration for your {connectors} instance. For the *HTTP sink connector*, provide the following information:
-
-.. *Data shape Format*: Accept the default, `application/octet-stream`.
+. Provide the connector-specific configuration for your HTTP sink {connectors} instance:
+.. *Topic Names*: Type the name of the topic that you used for the source {connectors} instance. For example, type `test-topic`.
 .. *Method*: Accept the default, `POST`.
 .. *URL*: Type your unique URL from the link:https://webhook.site[webhook.site^].
-.. *Topic Names*: Type the name of the topic that you used for the source {connectors} instance. For example, type *test-topic*.
+.. *Data Shape Consumes Format*: Accept the default, `application/octet-stream`.
+
+. Click *Next*.
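The behavior this configuration produces can be pictured with a short sketch (an illustration, not the connector's code): consume each record from the topic and POST its body to your webhook.site URL. It assumes the `kafka-python` and `requests` packages; the URL and connection values are placeholders:

[source,python]
----
# Sketch: the HTTP sink behavior -- consume from the topic and POST each
# message body to the configured URL using the configured method (POST).
import requests
from kafka import KafkaConsumer

WEBHOOK_URL = "https://webhook.site/<your-unique-id>"  # placeholder

consumer = KafkaConsumer(
    "test-topic",
    bootstrap_servers="<bootstrap-server>:443",  # placeholder
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",                      # assumes SASL/PLAIN is enabled
    sasl_plain_username="<client-id>",           # placeholder
    sasl_plain_password="<client-secret>",       # placeholder
)
for record in consumer:
    # Each Kafka record body becomes one HTTP request body.
    requests.post(WEBHOOK_URL, data=record.value, timeout=10)
----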

-. Optionally, configure the error handling policy for your {connectors} instance. For example, select *log* and then click *Next*.
+. Select an error handling policy for your {connectors} instance. For example, select *Stop* and then click *Next*.

 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
 +
-Your {connectors} instance is listed in the table of {connectors}.
+Your {connectors} instance is added to the *{connectors} Instances* page.
 +
 After a couple of seconds, the status of your {connectors} instance changes to the *Ready* state. It consumes messages from the associated Kafka topic and sends them to the data sink (for this example, the data sink is the HTTP URL that you provided).
