Commit fc23885

jc-1844 gs updates (#620)

* jc-1844 gs update for la
* jc-1844 peer review
* jc-1844 update attributes
* jc-1844 more cleanup
* jc-1844 peer review 2
* jc-1844 peer edits 3
* jc-1844 peer edits 4
* jc-1844 ui change namespace > deployment
* jc-1844 update path

1 parent ed06d81 commit fc23885

File tree

29 files changed: +135 -38 lines changed


docs/_artifacts/document-attributes.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/
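The added entries behave like any other AsciiDoc document attributes: once defined in the header, `{osd-name}`, `{rosa-name}`, and their `-short` variants are substituted wherever they appear in the body text. A minimal sketch of the mechanism (the sentence is illustrative, not taken from these docs):

```asciidoc
:osd-name: OpenShift Dedicated
:rosa-name: OpenShift Service for AWS

You can deploy your instances to an {osd-name} Trial cluster or to a {rosa-name} cluster.
```

When rendered, the attribute references expand to "OpenShift Dedicated" and "OpenShift Service for AWS", which is why the later hunks in this commit can switch product wording by editing only these definitions.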

docs/api-designer/getting-started-api-designer/README.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/

docs/connectors/getting-started-connectors/README.adoc

Lines changed: 59 additions & 37 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/
@@ -95,6 +98,12 @@ In this example, you connect a data source (a data generator) that creates Kafka
 
 // Condition out QS-only content so that it doesn't appear in docs.
 // All QS anchor IDs must be in this alternate anchor ID format `[#anchor-id]` because the ascii splitter relies on the other format `[id="anchor-id"]` to generate module files.
+
+ifndef::qs[]
+.Example flow of messages from a data source to a data sink
+image::{imagesdir}/connectors-getting-started-connectors/connectors-example-diagram.png[Image of data flowing from a data source to a data sink]
+endif::[]
+
 ifdef::qs[]
 [#description]
 ====
@@ -121,26 +130,32 @@ endif::[]
 
 Before you use {product-connectors}, you must complete the following prerequisites:
 
-* Determine which {openshift} environment to use for deploying your {product-connectors} instances.
+* Determine which {openshift} environment to use for your _{connectors} namespace_. The {connectors} namespace is where your {product-connectors} instances are deployed.
 
 * Configure {product-long-kafka} for use with {product-connectors}.
 
-*Determining which {openshift} environment to use for deploying your {connectors} instances*
+*Determining which {openshift} environment to use for your {connectors} namespace*
 
-For Service Preview, you have two choices:
+You have three choices:
 
 * *The hosted preview environment*
 
 ** The {connectors} instances are hosted on a multitenant {osd-name-short} cluster that is owned by Red Hat.
 ** You can create four {connectors} instances at a time.
-** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Service Preview guidelines^].
+** The preview environment applies 48-hour expiration windows, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/8190dc9e-249c-4207-bd69-096e5dd5bc64[Red Hat {openshift} {connectors} Preview guidelines^].
 
-* *Your own trial environment*
+* *Your own {osd-name} Trial environment*
 
-** You have access to your own {osd-name-short} trial environment.
+** You have access to your own {osd-name} Trial environment.
 ** You can create an unlimited number of {connectors} instances.
-** Your {osd-name-short} trial cluster expires after 60 days.
-** A cluster administrator has installed the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+** Your {osd-name-short} Trial cluster expires after 60 days.
+** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
+
+* *Your own {rosa-name} cluster*
+
+** You have access to your own {rosa-name} (ROSA) environment.
+** You can create {connectors} instances depending on your subscription, as described in https://access.redhat.com/articles/6990631[Red Hat OpenShift Connectors Tiers^].
+** A cluster administrator must install the {product-connectors} add-on as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
 
 *Configuring {product-long-kafka} for use with {product-connectors}*
 
@@ -152,8 +167,8 @@ ifdef::qs[]
 Complete the steps in the link:https://console.redhat.com/application-services/learning-resources?quickstart=getting-started[Getting started with {product-long-kafka}] quick start to set up the following components:
 endif::[]
 
-* A _Kafka instance_ that you can use for {product-connectors}.
-* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks.
+* A _Kafka instance_ that you can use for {product-connectors}. For this example, the Kafka instance is `test-connect`.
+* A _Kafka topic_ to store messages sent by data sources and make the messages available to data sinks. For this example, the Kafka topic is `test-topic`.
 * A _service account_ that allows you to connect and authenticate your {connectors} instances with your Kafka instance.
 * _Access rules_ for the service account that define how your {connectors} instances can access and use the topics in your Kafka instance.
 
@@ -167,7 +182,7 @@ Make sure that you have set up the prerequisite components.
 * Did you save your service account credentials to a secure location?
 * Are the permissions for your service account listed on the *Access* page of the Kafka instance?
 * Is the Kafka topic that you created for {connectors} listed on the *Topics* page of the Kafka instance?
-* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your trial cluster?
+* If you plan to use your own {openshift} cluster ({osd-name-short} Trial or ROSA) to deploy your {product-connectors} instances, has a cluster administrator added the {product-connectors} add-on to your Trial cluster?
 
 endif::[]
 
@@ -178,7 +193,7 @@ ifndef::qs[]
 * Verify that you saved your service account credentials to a secure location.
 * Verify that the permissions for your service account are listed on the *Access* page of the Kafka instance.
 * Verify that the Kafka topic that you created for {product-connectors} is listed on the *Topics* page of the Kafka instance.
-* If you plan to use a 60-day {osd-name-short} trial cluster to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your trial cluster.
+* If you plan to use your own {openshift} cluster ({osd-name-short} Trial or ROSA) to deploy your {product-connectors} instances, verify that a cluster administrator added the {product-connectors} add-on to your Trial cluster.
 
 endif::[]
 
@@ -193,8 +208,11 @@ You configure your {connectors} instance to listen for events from the data sour
 
 For this example, you create an instance of the Data Generator source connector. The Data Generator is provided for development and testing purposes. You specify the text for a message and how often to send the message.
 
-ifndef::qs[]
 .Prerequisites
+
+* If you want to use a dead letter queue (DLQ) to handle any messaging errors, create a Kafka topic for the DLQ.
+
+ifndef::qs[]
 * You're logged in to the {product-long-connectors} web console at {service-url-connectors}[^].
 endif::[]
 
@@ -208,15 +226,17 @@ For example, to find the Data Generator source connector, type `data` in the sea
 +
 Click the card to select the connector, and then click *Next*.
 
-. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}, and then click *Next*.
+. On the *Kafka Instance* page, click the card for the {product-kafka} instance that you configured for {connectors}. For example, click the *test-connect* card.
++
+Click *Next*.
 
-. On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
+. On the *Deployment* page, the namespace that you select depends on your {openshift} environment.
 +
-If you're using a trial cluster in your own {osd-name-short} environment, select the card for the namespace that was created when a system administrator added the {connectors} service to your trial cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {osd-name-short} trial cluster^].
+If you're using your own {openshift} environment, select the card for the namespace that was created when a cluster administrator added the {connectors} service to your cluster, as described in https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01[Adding the Red Hat {openshift} {connectors} add-on to your {openshift} cluster^].
 +
 If you're using the hosted preview environment, click *Create preview namespace* to provision a namespace for hosting the {connectors} instances that you create.
-
-. Click *Next*.
++
+Click *Next*.
 
 . Specify the core configuration for your {connectors} instance:
 .. Type a name for your {connectors} instance. For example, type `hello world generator`.
@@ -227,16 +247,16 @@ If you're using the hosted preview environment, click *Create preview namespace*
 .. *Message*: Type the content of the message that you want the {connectors} instance to send to the Kafka topic. For example, type `Hello World!!`.
 .. *Period*: Specify the interval (in milliseconds) at which you want the {connectors} instance to send messages to the Kafka topic. For example, to send a message every 10 seconds, specify `10000`.
 .. *Data Shape Produces Format*: Accept the default, `application/octet-stream`.
++
+Click *Next*.
 
-. Click *Next*.
-
-. Select one of the following error handling policy for your {connectors} instance:
+. Select one of the following error handling policies for your {connectors} instance:
 +
-* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to *Failed* state. You can view the error message.
+* *Stop*: If a message fails to send, the {connectors} instance stops running and changes its status to the *Failed* state. You can view the error message.
 * *Ignore*: If a message fails to send, the {connectors} instance ignores the error and continues to run. No error message is logged.
-* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you specify.
-
-. Click *Next*.
+* *Dead letter queue*: If a message fails to send, the {connectors} instance sends error details to the Kafka topic that you created for the DLQ.
++
+Click *Next*.
 
 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
 +
@@ -253,8 +273,8 @@ ifndef::qs[]
 endif::[]
 
 .. In the {product-long-rhoas} web console, select *Streams for Apache Kafka* > *Kafka Instances*.
-.. Click the Kafka instance that you created for connectors.
-.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance.
+.. Click the Kafka instance that you created for connectors. For example, click *test-connect*.
+.. Click the *Topics* tab and then click the topic that you specified for your source {connectors} instance. For example, click *test-topic*.
 .. Click the *Messages* tab to see a list of `Hello World!!` messages.
 
 
@@ -273,7 +293,7 @@ ifndef::qs[]
 endif::[]
 * You created a Data Generator source {connectors} instance.
 * For the data sink example, open the free https://webhook.site[webhook.site^] in a browser window. The `webhook.site` page provides a unique URL that you copy for use as an HTTP data sink.
-
+* If you want to use a dead letter queue (DLQ) to handle any messaging errors, create a Kafka topic for the DLQ.
 
 .Procedure
 
@@ -283,17 +303,17 @@ endif::[]
 .. For example, type `http` in the search field. The list of {connectors} is filtered to show the *HTTP sink* connector.
 .. Click the *HTTP sink* card and then click *Next*.
 
-. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with.
+. On the *Kafka Instance* page, select the {product-kafka} instance for the connector to work with. For example, select *test-connect*.
 +
-For example, select *test* and then click *Next*.
+Click *Next*.
 
-. On the *Namespace* page, the namespace that you select depends on your {osd-name-short} environment. The namespace is the deployment space that hosts your {connectors} instances.
+. On the *Deployment* page, the namespace that you select depends on your {openshift} environment.
 +
-If you're using a trial cluster on your own {osd-name-short} environment, select the card for the namespace that was created when you added the {connectors} service to your trial cluster.
+If you're using your own {openshift} environment, select the card for the namespace that was created when a cluster administrator added the {connectors} service to your cluster.
 +
 If you're using the hosted preview environment, click the *preview namespace* that you provisioned when you created the source connector.
-
-. Click *Next*.
++
+Click *Next*.
 
 . Provide the core configuration for your connector:
 .. Type a unique name for the connector. For example, type `hello world receiver`.
@@ -304,10 +324,12 @@ If you're using the hosted preview environment, click the *preview namespace* th
 .. *Method*: Accept the default, `POST`.
 .. *URL*: Type your unique URL from the link:https://webhook.site[webhook.site^].
 .. *Data Shape Consumes Format*: Accept the default, `application/octet-stream`.
++
+Click *Next*.
 
-. Click *Next*.
-
-. Select an error handling policy for your {connectors} instance. For example, select *Stop* and then click *Next*.
+. Select an error handling policy for your {connectors} instance. For example, select *Stop*.
++
+Click *Next*.
 
 . Review the summary of the configuration properties and then click *Create {connectors} instance*.
 +
Binary file (157 KB) not shown.

docs/connectors/getting-started-connectors/quickstart.yml

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ spec:
 description: !snippet README.adoc#description
 prerequisites:
 - Complete the <a href="https://console.redhat.com/application-services/learning-resources?quickstart=getting-started">Getting started with OpenShift Streams for Apache Kafka</a> quick start.
-- If you plan to use a 60-day OpenShift Dedicated trial cluster to deploy your Connectors instances, a cluster administrator must install the Connectors add-on as described in <a href="https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01">Adding the Red Hat OpenShift Connectors add-on to your OpenShift Dedicated trial cluster</a>.
+- If you plan to use your own OpenShift cluster to deploy your Connectors instances, a cluster administrator must install the Connectors add-on as described in <a href="https://access.redhat.com/documentation/en-us/openshift_connectors/1/guide/15a79de0-8827-4bf1-b445-8e3b3eef7b01">Adding the Red Hat OpenShift Connectors add-on to your OpenShift cluster</a>.
 introduction: !snippet README.adoc#introduction
 tasks:
 - !snippet/proc README.adoc#proc-verifying-prerequisites-for-connectors
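The `!snippet` and `!snippet/proc` entries above are custom YAML tags that the docs build resolves into content pulled from anchors in README.adoc. As an illustration of the general mechanism only (this is not the project's actual build code, and the `QuickstartLoader` name and placeholder behavior are invented for this sketch), a custom tag can be handled in Python with PyYAML like this:

```python
import yaml

class QuickstartLoader(yaml.SafeLoader):
    """SafeLoader subclass that understands the custom !snippet tags."""

def snippet_constructor(loader, node):
    # Keep the reference (e.g. "README.adoc#description") as data;
    # a real build would read the anchored section from that file.
    return {"snippet_ref": loader.construct_scalar(node)}

# Register the same constructor for both tag variants used in quickstart.yml.
QuickstartLoader.add_constructor("!snippet", snippet_constructor)
QuickstartLoader.add_constructor("!snippet/proc", snippet_constructor)

doc = yaml.load(
    "description: !snippet README.adoc#description\n"
    "tasks:\n"
    "- !snippet/proc README.adoc#proc-verifying-prerequisites-for-connectors\n",
    Loader=QuickstartLoader,
)
print(doc["description"])  # {'snippet_ref': 'README.adoc#description'}
```

Registering the tags on a `SafeLoader` subclass keeps the rest of the document parsed safely while still allowing the custom syntax.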

docs/connectors/rhoas-cli-getting-started-connectors/README.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/

docs/kafka/access-mgmt-kafka/README.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/

docs/kafka/consumer-configuration-kafka/README.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/

docs/kafka/getting-started-kafka/README.adoc

Lines changed: 3 additions & 0 deletions

@@ -17,7 +17,10 @@ WARNING: This content is generated by running npm --prefix .build run generate:a
 
 //OpenShift
 :openshift: OpenShift
+:osd-name: OpenShift Dedicated
 :osd-name-short: OpenShift Dedicated
+:rosa-name: OpenShift Service for AWS
+:rosa-name-short: OpenShift Service for AWS
 
 //OpenShift Application Services CLI
 :base-url-cli: https://github.com/redhat-developer/app-services-cli/tree/main/docs/
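Several hunks in this commit toggle content with `ifdef::qs[]` and `ifndef::qs[]`. These are standard AsciiDoc conditional preprocessor directives keyed on whether the `qs` attribute is defined at build time; a minimal sketch (the sentence content is illustrative, not from these docs):

```asciidoc
ifdef::qs[]
This text renders only when the qs attribute is defined (the quick-start build).
endif::[]

ifndef::qs[]
This text renders only in the regular docs build, where qs is not defined.
endif::[]
```

This is how the same README.adoc source can produce both the quick-start card in the console and the standalone documentation, which is why the commit moves the image and the DLQ prerequisite inside the appropriate conditional blocks.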

0 commit comments