
Commit ca9308b

Bump provider version in examples to 3.2.0-alpha1
1 parent bc67982

File tree

21 files changed: +408 −19 lines changed

docs/resources/clickpipe.md

Lines changed: 312 additions & 0 deletions
@@ -0,0 +1,312 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "clickhouse_clickpipe Resource - clickhouse"
subcategory: ""
description: |-
  This experimental resource allows you to create and manage ClickPipes data ingestion in ClickHouse Cloud.
  The resource is in early access and may change in future releases. Feature coverage might not yet extend to all ClickPipe capabilities.
  Known limitations:
  ClickPipe does not support table updates for managed tables. If you need to update the table schema, you will have to do that externally.
---

# clickhouse_clickpipe (Resource)

This experimental resource allows you to create and manage ClickPipes data ingestion in ClickHouse Cloud.

**The resource is in early access and may change in future releases. Feature coverage might not yet extend to all ClickPipe capabilities.**

Known limitations:

- ClickPipe does not support table updates for managed tables. If you need to update the table schema, you will have to do that externally.
## Example Usage

```terraform
resource "clickhouse_clickpipe" "kafka_clickpipe" {
  name        = "My Kafka ClickPipe"
  description = "Data pipeline from Kafka to ClickHouse"

  service_id = "e9465b4b-f7e5-4937-8e21-8d508b02843d"

  scaling {
    replicas               = 2
    replica_cpu_millicores = 250
    replica_memory_gb      = 1.0
  }

  state = "Running"

  source {
    kafka {
      type    = "confluent"
      format  = "JSONEachRow"
      brokers = "my-kafka-broker:9092"
      topics  = "my_topic"

      consumer_group = "clickpipe-test"

      credentials {
        username = "user"
        password = "***"
      }
    }
  }

  destination {
    table         = "my_table"
    managed_table = true

    table_definition {
      engine {
        type = "MergeTree"
      }
    }

    columns {
      name = "my_field1"
      type = "String"
    }

    columns {
      name = "my_field2"
      type = "UInt64"
    }
  }

  field_mappings = [
    {
      source_field      = "my_field"
      destination_field = "my_field1"
    }
  ]
}
```
<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `destination` (Attributes) The destination for the ClickPipe. (see [below for nested schema](#nestedatt--destination))
- `name` (String) The name of the ClickPipe.
- `service_id` (String) The ID of the service to which the ClickPipe belongs.
- `source` (Attributes) The data source for the ClickPipe. At least one source configuration must be provided. (see [below for nested schema](#nestedatt--source))

### Optional

- `field_mappings` (Attributes List) Field mappings between the source and the destination table. (see [below for nested schema](#nestedatt--field_mappings))
- `scaling` (Attributes) (see [below for nested schema](#nestedatt--scaling))
- `state` (String) The desired state of the ClickPipe. (`Running`, `Stopped`). Default is `Running`.

### Read-Only

- `id` (String) The ID of the ClickPipe. Generated by ClickHouse Cloud.
<a id="nestedatt--destination"></a>
105+
### Nested Schema for `destination`
106+
107+
Required:
108+
109+
- `columns` (Attributes List) The list of columns for the ClickHouse table. (see [below for nested schema](#nestedatt--destination--columns))
110+
- `table` (String) The name of the ClickHouse table.
111+
112+
Optional:
113+
114+
- `database` (String) The name of the ClickHouse database. Default is `default`.
115+
- `managed_table` (Boolean) Whether the table is managed by ClickHouse Cloud. If `false`, the table must exist in the database. Default is `true`.
116+
- `roles` (List of String) ClickPipe will create a ClickHouse user with these roles. Add your custom roles here if required.
117+
- `table_definition` (Attributes) Definition of the destination table. Required for ClickPipes managed tables. (see [below for nested schema](#nestedatt--destination--table_definition))
118+
119+
<a id="nestedatt--destination--columns"></a>
120+
### Nested Schema for `destination.columns`
121+
122+
Required:
123+
124+
- `name` (String) The name of the column.
125+
- `type` (String) The type of the column.
126+
127+
128+
<a id="nestedatt--destination--table_definition"></a>
129+
### Nested Schema for `destination.table_definition`
130+
131+
Required:
132+
133+
- `engine` (Attributes) The engine of the ClickHouse table. (see [below for nested schema](#nestedatt--destination--table_definition--engine))
134+
135+
Optional:
136+
137+
- `partition_by` (String) The column to partition the table by.
138+
- `primary_key` (String) The primary key of the table.
139+
- `sorting_key` (List of String) The list of columns for the sorting key.
140+
141+
<a id="nestedatt--destination--table_definition--engine"></a>
142+
### Nested Schema for `destination.table_definition.engine`
143+
144+
Required:
145+
146+
- `type` (String) The type of the engine. Only `MergeTree` is supported.
147+
148+
149+
150+
151+
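For illustration, a managed destination that uses these table definition attributes might look like the following sketch, shown as a fragment of a `clickhouse_clickpipe` resource (the table name, column names, and partition expression are hypothetical):

```terraform
# Sketch only: a managed destination with an explicit table definition.
# "events", "timestamp", and "user_id" are hypothetical names.
destination {
  table         = "events"
  managed_table = true

  table_definition {
    engine {
      type = "MergeTree"
    }
    partition_by = "toYYYYMM(timestamp)"
    sorting_key  = ["timestamp", "user_id"]
  }

  columns {
    name = "timestamp"
    type = "DateTime"
  }

  columns {
    name = "user_id"
    type = "UInt64"
  }
}
```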
<a id="nestedatt--source"></a>
152+
### Nested Schema for `source`
153+
154+
Optional:
155+
156+
- `kafka` (Attributes) The Kafka source configuration for the ClickPipe. (see [below for nested schema](#nestedatt--source--kafka))
157+
- `kinesis` (Attributes) The Kinesis source configuration for the ClickPipe. (see [below for nested schema](#nestedatt--source--kinesis))
158+
- `object_storage` (Attributes) The compatible object storage source configuration for the ClickPipe. (see [below for nested schema](#nestedatt--source--object_storage))
159+
160+
<a id="nestedatt--source--kafka"></a>
161+
### Nested Schema for `source.kafka`
162+
163+
Required:
164+
165+
- `brokers` (String) The list of Kafka bootstrap brokers. (comma separated)
166+
- `format` (String) The format of the Kafka source. (`JSONEachRow`, `Avro`, `AvroConfluent`)
167+
- `topics` (String) The list of Kafka topics. (comma separated)
168+
169+
Optional:
170+
171+
- `authentication` (String) The authentication method for the Kafka source. (`PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `IAM_ROLE`, `IAM_USER`). Default is `PLAIN`.
172+
- `ca_certificate` (String) PEM encoded CA certificates to validate the broker's certificate.
173+
- `consumer_group` (String) Consumer group of the Kafka source. If not provided `clickpipes-<ID>` will be used.
174+
- `credentials` (Attributes) The credentials for the Kafka source. (see [below for nested schema](#nestedatt--source--kafka--credentials))
175+
- `iam_role` (String) The IAM role for the Kafka source. Use with `IAM_ROLE` authentication. It can be used with AWS ClickHouse service only. Read more in [ClickPipes documentation page](https://clickhouse.com/docs/en/integrations/clickpipes/kafka#iam)
176+
- `offset` (Attributes) The Kafka offset. (see [below for nested schema](#nestedatt--source--kafka--offset))
177+
- `reverse_private_endpoint_ids` (List of String) The list of reverse private endpoint IDs for the Kafka source. (comma separated)
178+
- `schema_registry` (Attributes) The schema registry for the Kafka source. (see [below for nested schema](#nestedatt--source--kafka--schema_registry))
179+
- `type` (String) The type of the Kafka source. (`kafka`, `redpanda`, `confluent`, `msk`, `warpstream`, `azureeventhub`). Default is `kafka`.
180+
181+
<a id="nestedatt--source--kafka--credentials"></a>
182+
### Nested Schema for `source.kafka.credentials`
183+
184+
Optional:
185+
186+
- `access_key_id` (String, Sensitive) The access key ID for the Kafka source. Use with `IAM_USER` authentication.
187+
- `connection_string` (String, Sensitive) The connection string for the Kafka source. Use with `azureeventhub` Kafka source type. Use with `PLAIN` authentication.
188+
- `password` (String, Sensitive) The password for the Kafka source.
189+
- `secret_key` (String, Sensitive) The secret key for the Kafka source. Use with `IAM_USER` authentication.
190+
- `username` (String, Sensitive) The username for the Kafka source.
191+
192+
193+
<a id="nestedatt--source--kafka--offset"></a>
194+
### Nested Schema for `source.kafka.offset`
195+
196+
Required:
197+
198+
- `strategy` (String) The offset strategy for the Kafka source. (`from_beginning`, `from_latest`, `from_timestamp`)
199+
200+
Optional:
201+
202+
- `timestamp` (String) The timestamp for the Kafka offset. Use with `from_timestamp` offset strategy. (format `2021-01-01T00:00`)
203+
204+
205+
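As a sketch, a Kafka source that starts consuming from a fixed point in time could combine these attributes as follows (the timestamp value is hypothetical, and the omitted attributes follow the main example above):

```terraform
# Sketch only: resume consumption from a specific timestamp.
source {
  kafka {
    # ... type, format, brokers, topics as in the main example ...

    offset {
      strategy  = "from_timestamp"
      timestamp = "2024-01-01T00:00" # must match the `2021-01-01T00:00` format
    }
  }
}
```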
<a id="nestedatt--source--kafka--schema_registry"></a>
206+
### Nested Schema for `source.kafka.schema_registry`
207+
208+
Required:
209+
210+
- `authentication` (String) The authentication method for the Schema Registry. Only supported is `PLAIN`.
211+
- `credentials` (Attributes) The credentials for the Schema Registry. (see [below for nested schema](#nestedatt--source--kafka--schema_registry--credentials))
212+
- `url` (String) The URL of the schema registry.
213+
214+
<a id="nestedatt--source--kafka--schema_registry--credentials"></a>
215+
### Nested Schema for `source.kafka.schema_registry.credentials`
216+
217+
Required:
218+
219+
- `password` (String, Sensitive) The password for the Schema Registry.
220+
- `username` (String, Sensitive) The username for the Schema Registry.
221+
222+
223+
224+
225+
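For example, a source reading Avro messages validated against a Confluent schema registry might be sketched like this (the registry URL and credentials are hypothetical):

```terraform
# Sketch only: AvroConfluent format paired with a schema registry.
source {
  kafka {
    type   = "confluent"
    format = "AvroConfluent"
    # ... brokers, topics, credentials as in the main example ...

    schema_registry {
      url            = "https://my-registry.example.com:8081" # hypothetical
      authentication = "PLAIN"

      credentials {
        username = "registry-user"
        password = "***"
      }
    }
  }
}
```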
<a id="nestedatt--source--kinesis"></a>
226+
### Nested Schema for `source.kinesis`
227+
228+
Required:
229+
230+
- `authentication` (String) The authentication method for the Kinesis source. (`IAM_ROLE`, `IAM_USER`).
231+
- `format` (String) The format of the Kinesis source. (`JSONEachRow`, `Avro`, `AvroConfluent`)
232+
- `iterator_type` (String) The iterator type for the Kinesis source. (`TRIM_HORIZON`, `LATEST`, `AT_TIMESTAMP`)
233+
- `region` (String) The AWS region of the Kinesis stream.
234+
- `stream_name` (String) The name of the Kinesis stream.
235+
236+
Optional:
237+
238+
- `access_key` (Attributes) The access key for the Kinesis source. Use with `IAM_USER` authentication. (see [below for nested schema](#nestedatt--source--kinesis--access_key))
239+
- `iam_role` (String) The IAM role for the Kinesis source. Use with `IAM_ROLE` authentication. It can be used with AWS ClickHouse service only. Read more in [ClickPipes documentation page](https://clickhouse.com/docs/en/integrations/clickpipes/kinesis).
240+
- `timestamp` (String) The timestamp for the Kinesis source. Use with `AT_TIMESTAMP` iterator type. (format `2021-01-01T00:00`)
241+
- `use_enhanced_fan_out` (Boolean) Whether to use enhanced fan-out consumer.
242+
243+
<a id="nestedatt--source--kinesis--access_key"></a>
244+
### Nested Schema for `source.kinesis.access_key`
245+
246+
Required:
247+
248+
- `access_key_id` (String, Sensitive) The access key ID for the Kinesis source.
249+
- `secret_key` (String, Sensitive) The secret key for the Kinesis source.
250+
251+
252+
253+
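A minimal sketch of a Kinesis source using `IAM_USER` authentication, with hypothetical stream and key values:

```terraform
# Sketch only: Kinesis source with static credentials.
source {
  kinesis {
    format         = "JSONEachRow"
    region         = "us-east-1"
    stream_name    = "my-stream" # hypothetical
    authentication = "IAM_USER"
    iterator_type  = "TRIM_HORIZON"

    access_key {
      access_key_id = "AKIA..." # hypothetical
      secret_key    = "***"
    }
  }
}
```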
<a id="nestedatt--source--object_storage"></a>
254+
### Nested Schema for `source.object_storage`
255+
256+
Required:
257+
258+
- `format` (String) The format of the S3 objects. (`JSONEachRow`, `CSV`, `CSVWithNames`, `Parquet`)
259+
260+
Optional:
261+
262+
- `access_key` (Attributes) Access key (see [below for nested schema](#nestedatt--source--object_storage--access_key))
263+
- `authentication` (String) CONNECTION_STRING is for Azure Blob Storage. IAM_ROLE and IAM_USER are for AWS S3/GCS/DigitalOcean. If not provided, no authentication is used
264+
- `azure_container_name` (String) Container name for Azure Blob Storage. Required when type is azureblobstorage. Example: `mycontainer`
265+
- `compression` (String) Compression algorithm used for the files.. (`auto`, `gzip`, `brotli`, `br`, `xz`, `LZMA`, `zstd`)
266+
- `connection_string` (String, Sensitive) Connection string for Azure Blob Storage authentication. Required when authentication is CONNECTION_STRING. Example: `DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;EndpointSuffix=core.windows.net`
267+
- `delimiter` (String) The delimiter for the S3 source. Default is `,`.
268+
- `iam_role` (String) The IAM role for the S3 source. Use with `IAM_ROLE` authentication. It can be used with AWS ClickHouse service only. Read more in [ClickPipes documentation page](https://clickhouse.com/docs/en/integrations/clickpipes/object-storage#authentication)
269+
- `is_continuous` (Boolean) If set to true, the pipe will continuously read new files from the source. If set to false, the pipe will read the files only once. New files have to be uploaded lexically order.
270+
- `path` (String) Path to the file(s) within the Azure container. Used for Azure Blob Storage sources. You can specify multiple files using bash-like wildcards. For more information, see the documentation on using wildcards in path: https://clickhouse.com/docs/en/integrations/clickpipes/object-storage#limitations. Example: `data/logs/*.json`
271+
- `type` (String) The type of the S3-compatbile source (`s3`, `gcs`, `azureblobstorage`). Default is `s3`.
272+
- `url` (String) The URL of the S3/GCS bucket. Required for S3 and GCS types. Not used for Azure Blob Storage (use path and azure_container_name instead). You can specify multiple files using bash-like wildcards. For more information, see the documentation on using wildcards in path: https://clickhouse.com/docs/en/integrations/clickpipes/object-storage#limitations
273+
274+
<a id="nestedatt--source--object_storage--access_key"></a>
275+
### Nested Schema for `source.object_storage.access_key`
276+
277+
Optional:
278+
279+
- `access_key_id` (String, Sensitive) The access key ID for the S3 source. Use with `IAM_USER` authentication.
280+
- `secret_key` (String, Sensitive) The secret key for the S3 source. Use with `IAM_USER` authentication.
281+
282+
283+
284+
285+
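Putting the object storage attributes together, a continuous S3 ingestion source could be sketched as follows (the bucket URL and credentials are hypothetical):

```terraform
# Sketch only: continuously ingest new JSON files from an S3 bucket.
source {
  object_storage {
    type           = "s3"
    format         = "JSONEachRow"
    url            = "https://my-bucket.s3.amazonaws.com/data/*.json" # hypothetical
    is_continuous  = true
    authentication = "IAM_USER"

    access_key {
      access_key_id = "AKIA..." # hypothetical
      secret_key    = "***"
    }
  }
}
```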
<a id="nestedatt--field_mappings"></a>
286+
### Nested Schema for `field_mappings`
287+
288+
Required:
289+
290+
- `destination_field` (String) The name of the column in destination table.
291+
- `source_field` (String) The name of the source field.
292+
293+
294+
<a id="nestedatt--scaling"></a>
295+
### Nested Schema for `scaling`
296+
297+
Optional:
298+
299+
- `replica_cpu_millicores` (Number) The CPU allocation per replica in millicores. Must be between 125 and 2000.
300+
- `replica_memory_gb` (Number) The memory allocation per replica in GB. Must be between 0.5 and 8.0.
301+
- `replicas` (Number) The number of desired replicas for the ClickPipe. Default is 1. The maximum value is 10.
302+
303+
## Import

Import is supported using the following syntax:

The [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import) can be used, for example:

```shell
# ClickPipes can be imported by specifying both the service ID and the ClickPipe ID.
terraform import clickhouse_clickpipe.example xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
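On Terraform 1.5 and later, the same import can also be expressed as a config-driven `import` block; a sketch using the placeholder IDs from above:

```terraform
import {
  to = clickhouse_clickpipe.example
  id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```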
docs/resources/clickpipes_reverse_private_endpoint.md

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "clickhouse_clickpipes_reverse_private_endpoint Resource - clickhouse"
subcategory: ""
description: |-
  This experimental resource allows you to create and manage ClickPipes reverse private endpoints for secure data source connections in ClickHouse Cloud.
  The resource is in early access and may change in future releases. Feature coverage might not yet extend to all ClickPipe capabilities.
---

# clickhouse_clickpipes_reverse_private_endpoint (Resource)

This experimental resource allows you to create and manage ClickPipes reverse private endpoints for secure data source connections in ClickHouse Cloud.

**The resource is in early access and may change in future releases. Feature coverage might not yet extend to all ClickPipe capabilities.**
## Example Usage

```terraform
resource "clickhouse_clickpipes_reverse_private_endpoint" "vpc_endpoint_service" {
  service_id                = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description               = "VPC_ENDPOINT_SERVICE reverse private endpoint for ClickPipes"
  type                      = "VPC_ENDPOINT_SERVICE"
  vpc_endpoint_service_name = "com.amazonaws.vpce.eu-west-1.vpce-svc-080826a65b5b27d4e"
}

resource "clickhouse_clickpipes_reverse_private_endpoint" "vpc_resource" {
  service_id                    = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description                   = "VPC_RESOURCE reverse private endpoint for ClickPipes"
  type                          = "VPC_RESOURCE"
  vpc_resource_configuration_id = "rcfg-1a2b3c4d5e6f7g8h9"
  vpc_resource_share_arn        = "arn:aws:ram:us-east-1:123456789012:resource-share/1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6"
}

resource "clickhouse_clickpipes_reverse_private_endpoint" "msk_multi_vpc" {
  service_id         = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description        = "MSK_MULTI_VPC reverse private endpoint for ClickPipes"
  type               = "MSK_MULTI_VPC"
  msk_cluster_arn    = "arn:aws:kafka:us-east-1:123456789012:cluster/ClickHouse-Cluster/1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6-1"
  msk_authentication = "SASL_IAM"
}
```
<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `description` (String) Description of the reverse private endpoint
- `service_id` (String) The ID of the ClickHouse service to associate with this reverse private endpoint
- `type` (String) Type of the reverse private endpoint (`VPC_ENDPOINT_SERVICE`, `VPC_RESOURCE`, or `MSK_MULTI_VPC`)

### Optional

- `msk_authentication` (String) MSK cluster authentication type (`SASL_IAM` or `SASL_SCRAM`), required for the `MSK_MULTI_VPC` type
- `msk_cluster_arn` (String) MSK cluster ARN, required for the `MSK_MULTI_VPC` type
- `vpc_endpoint_service_name` (String) VPC endpoint service name, required for the `VPC_ENDPOINT_SERVICE` type
- `vpc_resource_configuration_id` (String) VPC resource configuration ID, required for the `VPC_RESOURCE` type
- `vpc_resource_share_arn` (String) VPC resource share ARN, required for the `VPC_RESOURCE` type

### Read-Only

- `dns_names` (List of String) Internal DNS names of the reverse private endpoint
- `endpoint_id` (String) Endpoint ID of the reverse private endpoint
- `id` (String) Unique identifier for the reverse private endpoint
- `private_dns_names` (List of String) Private DNS names of the reverse private endpoint
- `status` (String) Status of the reverse private endpoint
## Import

Import is supported using the following syntax:

The [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import) can be used, for example:

```shell
# ClickPipes reverse private endpoints can be imported by specifying both the service ID and the reverse private endpoint ID.
terraform import clickhouse_clickpipes_reverse_private_endpoint.example xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
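As with the ClickPipe resource, on Terraform 1.5 and later this can be sketched as a config-driven `import` block using the same placeholder IDs:

```terraform
import {
  to = clickhouse_clickpipes_reverse_private_endpoint.example
  id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```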

examples/clickpipe/externally_managed_table/provider.tf

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 terraform {
   required_providers {
     clickhouse = {
-      version = "3.4.1-alpha1"
+      version = "3.2.0-alpha1"
       source  = "ClickHouse/clickhouse"
     }
   }
