modules/develop/pages/connect/configuration/monitor-connect.adoc (2 additions, 2 deletions)

@@ -1,7 +1,7 @@
-= Monitor Data Pipelines on BYOC Clusters
+= Monitor Data Pipelines on BYOC and Dedicated Clusters
 :description: Configure Prometheus monitoring of your data pipelines on BYOC clusters.

-You can configure monitoring on BYOC clusters to understand the behavior, health, and performance of your data pipelines.
+You can configure monitoring on BYOC and Dedicated clusters to understand the behavior, health, and performance of your data pipelines.

 Redpanda Connect automatically exports xref:components:metrics/about.adoc[detailed metrics for each component of your data pipeline] to a Prometheus endpoint, along with metrics for all other cluster services. You don't need to update the configuration of your pipeline.
modules/develop/pages/connect/configuration/scale-pipelines.adoc (3 additions, 3 deletions)

@@ -1,4 +1,4 @@
-= Scale Data Pipeline Resources on BYOC Clusters
+= Scale Data Pipeline Resources on BYOC and Dedicated Clusters
 :description: Learn how to manually scale resources for data pipelines using the Data Plane API.

 When you create a data pipeline through the Cloud UI, Redpanda Connect reserves compute resources for the exclusive use of that pipeline. This initial resource allocation is enough to experiment with pipelines that create low message volumes.
@@ -16,7 +16,7 @@ Use the Cloud UI or Data Plane API to view resources already allocated to a data
 Cloud UI::
 +
 --
-. Log in to https://cloud.redpanda.com[Redpanda Cloud^].
+. Log in to https://cloud.redpanda.com[Redpanda Cloud^].
 . Go to the cluster where the pipeline is set up.
 . On the **Connectors** page, select your pipeline and look at the value for **Resources**.
 +
@@ -47,7 +47,7 @@ This example allocates 1.2 vCPU and 500 MB of memory to a data pipeline. For `cp
 +
 [,bash]
 ----
-curl -X PUT "https://api-dfb1e463.crhjl9gj1v2u1117r1f0.byoc.prd.cloud.redpanda.com/v1alpha2/redpanda-connect/pipelines/xxx..." \
+curl -X PUT "https://<data-plane-api-url>/v1alpha2/redpanda-connect/pipelines/xxx..." \
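The diff shows only the first line of the request. A complete call might look like the following sketch; the resource field names (`cpu_shares`, `memory_shares`), the bearer-token auth header, and the payload shape are assumptions not shown in this diff — check the Data Plane API reference for the exact schema.

```shell
# Sketch only. Field names and auth header below are assumptions;
# verify them against the Data Plane API reference.
DATA_PLANE_URL="<data-plane-api-url>"   # copy from your cluster's overview
PIPELINE_ID="xxx..."                    # placeholder pipeline ID, as above

# Request body allocating 1.2 vCPU and 500 MB of memory,
# matching the example in the surrounding text.
BODY='{"resources": {"cpu_shares": "1200m", "memory_shares": "500M"}}'
echo "$BODY"

# Uncomment once the placeholders are filled in:
# curl -X PUT "https://${DATA_PLANE_URL}/v1alpha2/redpanda-connect/pipelines/${PIPELINE_ID}" \
#   -H "Authorization: Bearer <api-token>" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```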
-A Redpanda Cloud account for Serverless or standard BYOC (not customer-managed VPC). If you don't already have an account, https://redpanda.com/try-redpanda/cloud-trial[sign up for a free trial^].
+A Redpanda Cloud account for Serverless, Dedicated, or standard BYOC (not customer-managed VPC). If you don't already have an account, https://redpanda.com/try-redpanda/cloud-trial[sign up for a free trial^].

 == Before you start
@@ -40,17 +40,26 @@ BYOC::
 +
 Wait while your cluster is created.
 --
+Dedicated::
++
+--
+. Log in to https://cloud.redpanda.com[Redpanda Cloud^].
+. On the **Clusters** page, click **Create cluster**, then click **Create Dedicated cluster**.
+. On the **Cluster settings** page, enter **connect-quickstart** for the cluster name.
+. Select your cloud provider, then click **Next**.
+. On the **Networking** page, use the default **Public** connection type, and click **Create**.
++
+Wait while your cluster is created.
+--
 =====

 To complete your setup:

 . Go to the **Topics** page, click **Create topic** and enter **processed-emails** for the topic name. Use default values for the remaining properties and click **Create** and then **Close**.
-. Go to the **Security** page, and click **Create user**. Enter the username **connect**. Use the default values for the remaining properties. Remember to take a note of your password.
+. Go to the **Security** page, and click **Create user**. Enter the username **connect** and take a note of the password. You will need it later. Use the default values for the remaining properties.
 . Click **Create** and **Done**.
 . Stay on the **Access control** page and click the **ACLs** tab.
 . Select the **connect** user you have just created. Click **Allow all operations** and then scroll down to click **OK**.
-. Finally, go to the **Overview** page and click the **Kafka API** tab.
-. Copy the bootstrap server URL into a text file. You will need it for the next steps.

 == Build your data pipeline
@@ -71,8 +80,7 @@ All Redpanda Connect configurations use a YAML file split into three sections:
 | A xref:components:outputs/kafka_franz.adoc[`kafka_franz` output] that writes messages to the **connect-output** topic on your cluster.
 |===

-. Go to the **Connectors** page on your cluster and click the **Redpanda Connect** tab.
-. Click **Create pipeline**.
+. Go to the **Connect** page on your cluster and click **Create pipeline**.
 . In **Pipeline name**, enter **emailprocessor-pipeline** and add a short description. For example, **Transforms email data using a mutation processor**.
 . In the **Configuration** box, paste the following configuration.
@@ -96,7 +104,7 @@ pipeline:
 output:
   kafka_franz:
     seed_brokers:
-      - <bootstrap-server-url>
+      - ${REDPANDA_BROKERS}
     sasl:
       - mechanism: SCRAM-SHA-256
         password: <cluster-password>
@@ -106,12 +114,9 @@ output:
       enabled: true
 ----

-
 +
-Replace the following placeholders:
-
-* `<bootstrap-server-url>`: The bootstrap server address you copied in <<before-you-start,Before you start>>.
-* `<cluster-password>`: The password of the connect user you set up in <<before-you-start,Before you start>>.
+* Replace `<cluster-password>` with the password of the connect user you set up in <<before-you-start,Before you start>>. To avoid exposing secrets, Redpanda Connect also supports secret variables. For more information, see xref:develop:connect/configuration/secret-management.adoc[Manage Secrets].
+* `$\{REDPANDA_BROKERS}` is a contextual variable that references the bootstrap server address of your cluster. All Redpanda Cloud clusters automatically set this variable to the bootstrap server address so that you can add it to any of your pipelines.

 +
 NOTE: The Brave browser does not fully support code snippets.
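The secret-variable approach mentioned in the list above could look like the following sketch of the output section. This is not taken from the diff: the secret name `CONNECT_PASSWORD` is hypothetical, the `username` line is an assumption based on the **connect** user created earlier, and the `${secrets.<name>}` syntax should be confirmed against the secret management page linked above.

```yaml
# Hypothetical sketch: assumes a secret named CONNECT_PASSWORD has been
# created through the cluster's secret management feature.
output:
  kafka_franz:
    seed_brokers:
      - ${REDPANDA_BROKERS}            # contextual variable set by Redpanda Cloud
    sasl:
      - mechanism: SCRAM-SHA-256
        username: connect              # the user created in "Before you start"
        password: ${secrets.CONNECT_PASSWORD}   # hypothetical secret reference
```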
@@ -146,7 +151,7 @@ To see the pipeline output:

 To view the logs:

-. Return to the **Connectors** page on your cluster and select the **emailprocessor-pipeline**.
+. Return to the **Connect** page on your cluster and select the **emailprocessor-pipeline**.
 . Click the **Logs** tab and select each of the four log messages. You can see the sequence of events that start the data pipeline. For example, you can see when Redpanda Connect starts to write data to the topic:
 +
@@ -196,8 +201,7 @@ The snippet includes new configuration to:

 . Click **Update**.
-. After a few seconds, click **Stop**.
-. Click the **Logs** tab and select the most recent (final) log message. You can see the custom logging fields along with the uppercase user's name.
+. Once the pipeline has started running, click the **Logs** tab and select the most recent (final) log message. You can see the custom logging fields along with the uppercase user's name.

 +
 [source,json]
@@ -214,13 +218,14 @@ The snippet includes new configuration to:
   "time": "2024-08-22T17:33:46.676903284Z"
 }
 ----
+. Click **Stop**.

 == Clean up

 When you've finished experimenting with your data pipeline, you can delete the pipeline, topic, and cluster you created for this quickstart.

-. On the **Connectors** page, select your pipeline.
-. Click **Delete** and confirm your deletion to remove the data pipeline and associated logs.
+. On the **Connect** page, select the delete icon next to the **emailprocessor-pipeline**.
+. Confirm your deletion to remove the data pipeline and associated logs.
 . On the **Topics** page, delete the **processed-emails** topic.
 . Go back to the **Clusters** page and delete the **connect-quickstart** cluster.