
Commit bab3774

Merge branch 'main' of github.com:elastic/terraform-provider-elasticstack into copilot/fix-1290-2
2 parents: 57a8924 + 789780d

84 files changed: +3796 additions, −1030 deletions


.buildkite/release.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 steps:
   - label: Release
     agents:
-      image: "golang:1.25.1@sha256:bb979b278ffb8d31c8b07336fd187ef8fafc8766ebeaece524304483ea137e96"
+      image: "golang:1.25.1@sha256:8305f5fa8ea63c7b5bc85bd223ccc62941f852318ebfbd22f53bbd0b358c07e1"
       cpu: "16"
       memory: "24G"
       ephemeralStorage: "20G"
```
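
The release pipeline pins the Go builder image by immutable digest rather than by tag alone; this commit bumps that digest. As a hedged sketch of how such a digest can be resolved (the `docker` commands are standard CLI but need a running daemon and network access, so they are commented out; the digest placeholder is illustrative):

```shell
#!/bin/sh
# Resolve the content digest behind a mutable tag so a pipeline can pin it.
IMAGE_TAG="golang:1.25.1"

# Pull the tag, then read the repo digest Docker recorded for it:
# docker pull "$IMAGE_TAG"
# docker image inspect "$IMAGE_TAG" --format '{{index .RepoDigests 0}}'
# -> golang@sha256:<64-hex-digest>

# The pinned reference combines tag and digest, as in the YAML above:
echo "image: \"${IMAGE_TAG}@sha256:<digest>\""
```

Pinning `tag@sha256:digest` keeps builds reproducible even if the tag is later re-pointed at a different image.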

.github/copilot-instructions.md

Lines changed: 7 additions & 12 deletions
````diff
@@ -1,14 +1,9 @@
 You will be tasked to fix an issue from an open-source repository. This is a Go based repository hosting a Terrform provider for the elastic stack (elasticsearch and kibana) APIs. This repo currently supports both [plugin framework](https://developer.hashicorp.com/terraform/plugin/framework/getting-started/code-walkthrough) and [sdkv2](https://developer.hashicorp.com/terraform/plugin/sdkv2) resources. Unless you're told otherwise, all new resources _must_ use the plugin framework.
 
-
-
-
 Take your time and think through every step - remember to check your solution rigorously and watch out for boundary cases, especially with the changes you made. Your solution must be perfect. If not, continue working on it. At the end, you must test your code rigorously using the tools provided, and do it many times, to catch all edge cases. If it is not robust, iterate more and make it perfect. Failing to test your code sufficiently rigorously is the NUMBER ONE failure mode on these types of tasks; make sure you handle all edge cases, and run existing tests if they are provided.
 
-
 Please see [README.md](../README.md) and the [CONTRIBUTING.md](../CONTRIBUTING.md) docs before getting started.
 
-
 # Workflow
 
 ## High-Level Problem Solving Strategy
@@ -57,27 +52,27 @@ Carefully read the issue and think hard about a plan to solve it before coding.
 - After each change, verify correctness by running relevant tests.
 - If tests fail, analyze failures and revise your patch.
 - Write additional tests if needed to capture important behaviors or edge cases.
-- Ensure all tests pass before finalizing.
+- NEVER accept acceptance tests that have been skipped due to environment issues; always ensure the environment is correctly set up and all tests run successfully.
 
 ### 6.1 Acceptance Testing Requirements
-When running acceptance tests (`make testacc`), ensure the following:
-
+When running acceptance tests, ensure the following:
 
 - **Environment Variables** - The following environment variables are required for acceptance tests:
   - `ELASTICSEARCH_ENDPOINTS` (default: http://localhost:9200)
   - `ELASTICSEARCH_USERNAME` (default: elastic)
   - `ELASTICSEARCH_PASSWORD` (default: password)
   - `KIBANA_ENDPOINT` (default: http://localhost:5601)
   - `TF_ACC` (must be set to "1" to enable acceptance tests)
-- **Ensure a valid environment if using `go test`** - Check if the required environment variables are set, if not use the defaults specified above.
-- **Always finish with `make testacc`** - This will run all tests. Make sure all tests pass before considering a task complete.
-- **Pre-set Environment Variables** - Default environment variables are configured in the Makefile. If these defaults are suitable for your testing environment, `make testacc` will work directly without additional setup
-- **Docker Environment** - For isolated testing with guaranteed environment setup, use `make docker-testacc` which starts Elasticsearch and Kibana containers automatically
+- **Run targeted tests using `go test`** - Ensure the required environment variables are explicitly defined when running targeted tests. Example:
+  ```bash
+  ELASTICSEARCH_ENDPOINTS=http://localhost:9200 ELASTICSEARCH_USERNAME=elastic ELASTICSEARCH_PASSWORD=password KIBANA_ENDPOINT=http://localhost:5601 TF_ACC=1 go test -v -run TestAccResourceName ./path/to/testfile.go
+  ```
 
 ## 7. Final Verification
 - Confirm the root cause is fixed.
 - Review your solution for logic correctness and robustness.
 - Iterate until you are extremely confident the fix is complete and all tests pass.
+- Run the acceptance tests for any changed resources. Ensure acceptance tests pass without any environment-related skips. Use `make testacc` to verify this, explicitly defining the required environment variables.
 - Run `make lint` to ensure any linting errors have not surfaced with your changes. This task may automatically correct any linting errors, and regenerate documentation. Include any changes in your commit.
 
 ## 8. Final Reflection and Additional Testing
````
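
The instructions above tell the agent to fall back to documented defaults when the acceptance-test variables are unset. A minimal sketch of that fallback using POSIX `${VAR:-default}` expansion (the final `go test` line is commented out because it needs a live Elasticsearch/Kibana stack, and `TestAccResourceName` is a placeholder test name):

```shell
#!/bin/sh
# Export each acceptance-test variable only if it is not already set,
# mirroring the defaults listed in the instructions above.
export ELASTICSEARCH_ENDPOINTS="${ELASTICSEARCH_ENDPOINTS:-http://localhost:9200}"
export ELASTICSEARCH_USERNAME="${ELASTICSEARCH_USERNAME:-elastic}"
export ELASTICSEARCH_PASSWORD="${ELASTICSEARCH_PASSWORD:-password}"
export KIBANA_ENDPOINT="${KIBANA_ENDPOINT:-http://localhost:5601}"
export TF_ACC=1   # acceptance tests are skipped entirely unless this is "1"

echo "testing against $ELASTICSEARCH_ENDPOINTS / $KIBANA_ENDPOINT"
# go test -v -run TestAccResourceName ./internal/...   # needs a running stack
```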

.github/workflows/test.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -45,7 +45,7 @@ jobs:
         terraform_wrapper: false
 
       - name: Lint
-        run: make lint
+        run: make check-lint
 
   test:
     name: Matrix Acceptance Test
```

CHANGELOG.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -14,11 +14,13 @@
 - Migrate `elasticstack_kibana_action_connector` to the Terraform plugin framework ([#1269](https://github.com/elastic/terraform-provider-elasticstack/pull/1269))
 - Migrate `elasticstack_elasticsearch_security_role_mapping` resource and data source to Terraform Plugin Framework ([#1279](https://github.com/elastic/terraform-provider-elasticstack/pull/1279))
 - Add support for `inactivity_timeout` in `elasticstack_fleet_agent_policy` ([#641](https://github.com/elastic/terraform-provider-elasticstack/issues/641))
+- Add support for `kafka` output types in `elasticstack_fleet_output` ([#1302](https://github.com/elastic/terraform-provider-elasticstack/pull/1302))
 - Add support for `prevent_initial_backfill` to `elasticstack_kibana_slo` ([#1071](https://github.com/elastic/terraform-provider-elasticstack/pull/1071))
 - [Refactor] Regenerate the SLO client using the current OpenAPI spec ([#1303](https://github.com/elastic/terraform-provider-elasticstack/pull/1303))
 - Add support for `data_view_id` in the `elasticstack_kibana_slo` resource ([#1305](https://github.com/elastic/terraform-provider-elasticstack/pull/1305))
 - Add support for `unenrollment_timeout` in `elasticstack_fleet_agent_policy` ([#1169](https://github.com/elastic/terraform-provider-elasticstack/issues/1169))
 - Handle default value for `allow_restricted_indices` in `elasticstack_elasticsearch_security_api_key` ([#1315](https://github.com/elastic/terraform-provider-elasticstack/pull/1315))
+- Fixed `nil` reference in kibana synthetics API client in case of response errors ([#1320](https://github.com/elastic/terraform-provider-elasticstack/pull/1320))
 
 ## [0.11.17] - 2025-07-21
```

Makefile

Lines changed: 4 additions & 2 deletions
```diff
@@ -52,7 +52,6 @@ build-ci: ## build the terraform provider
 .PHONY: build
 build: lint build-ci ## build the terraform provider
 
-
 .PHONY: testacc
 testacc: ## Run acceptance tests
 	TF_ACC=1 go test -v ./... -count $(ACCTEST_COUNT) -parallel $(ACCTEST_PARALLELISM) $(TESTARGS) -timeout $(ACCTEST_TIMEOUT)
@@ -254,7 +253,10 @@ golangci-lint:
 
 
 .PHONY: lint
-lint: setup golangci-lint check-fmt check-docs ## Run lints to check the spelling and common go patterns
+lint: setup golangci-lint fmt docs-generate ## Run lints to check the spelling and common go patterns
+
+.PHONY: check-lint
+check-lint: setup golangci-lint check-fmt check-docs
 
 .PHONY: fmt
 fmt: ## Format code
```
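
This diff splits linting into a mutating and a read-only target: `lint` now runs `fmt` and `docs-generate` (rewriting the tree), while the new `check-lint` keeps the old verify-only behavior (`check-fmt`, `check-docs`), which is what the test.yml workflow now calls. A sketch of the intended usage (the `make` invocations are commented out since they need the repo's Go toolchain; `IN_CI` is an illustrative flag, not something the Makefile defines):

```shell
#!/bin/sh
# Developers run the mutating target and commit its output;
# CI runs the read-only one so a dirty tree fails the build
# instead of being silently rewritten.
IN_CI=false

if [ "$IN_CI" = true ]; then
  TARGET="check-lint"   # setup golangci-lint check-fmt check-docs
else
  TARGET="lint"         # setup golangci-lint fmt docs-generate
fi

echo "would run: make $TARGET"
# make "$TARGET"
```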

docs/resources/fleet_output.md

Lines changed: 235 additions & 3 deletions
````diff
@@ -1,4 +1,3 @@
-
 ---
 # generated by https://github.com/hashicorp/terraform-plugin-docs
 page_title: "elasticstack_fleet_output Resource - terraform-provider-elasticstack"
@@ -13,6 +12,8 @@ Creates a new Fleet Output.
 
 ## Example Usage
 
+### Basic output
+
 ```terraform
 provider "elasticstack" {
   kibana {}
@@ -32,6 +33,168 @@ resource "elasticstack_fleet_output" "test_output" {
 }
 ```
 
+### Basic Kafka output
+
+```terraform
+terraform {
+  required_providers {
+    elasticstack = {
+      source  = "elastic/elasticstack"
+      version = "~> 0.11"
+    }
+  }
+}
+
+provider "elasticstack" {
+  elasticsearch {}
+  kibana {}
+}
+
+# Basic Kafka Fleet Output
+resource "elasticstack_fleet_output" "kafka_basic" {
+  name                 = "Basic Kafka Output"
+  output_id            = "kafka-basic-output"
+  type                 = "kafka"
+  default_integrations = false
+  default_monitoring   = false
+
+  hosts = [
+    "kafka:9092"
+  ]
+
+  # Basic Kafka configuration
+  kafka = {
+    auth_type     = "user_pass"
+    username      = "kafka_user"
+    password      = "kafka_password"
+    topic         = "elastic-beats"
+    partition     = "hash"
+    compression   = "gzip"
+    required_acks = 1
+
+    headers = [
+      {
+        key   = "environment"
+        value = "production"
+      }
+    ]
+  }
+}
+```
+
+### Advanced Kafka output
+
+```terraform
+terraform {
+  required_providers {
+    elasticstack = {
+      source  = "elastic/elasticstack"
+      version = "~> 0.11"
+    }
+  }
+}
+
+provider "elasticstack" {
+  elasticsearch {}
+  kibana {}
+}
+
+# Advanced Kafka Fleet Output with SSL authentication
+resource "elasticstack_fleet_output" "kafka_advanced" {
+  name                 = "Advanced Kafka Output"
+  output_id            = "kafka-advanced-output"
+  type                 = "kafka"
+  default_integrations = false
+  default_monitoring   = false
+
+  hosts = [
+    "kafka1:9092",
+    "kafka2:9092",
+    "kafka3:9092"
+  ]
+
+  # Advanced Kafka configuration
+  kafka = {
+    auth_type      = "ssl"
+    topic          = "elastic-logs"
+    partition      = "round_robin"
+    compression    = "snappy"
+    required_acks  = -1
+    broker_timeout = 10
+    timeout        = 30
+    version        = "2.6.0"
+    client_id      = "elastic-beats-client"
+
+    # Custom headers for message metadata
+    headers = [
+      {
+        key   = "datacenter"
+        value = "us-west-1"
+      },
+      {
+        key   = "service"
+        value = "beats"
+      },
+      {
+        key   = "environment"
+        value = "production"
+      }
+    ]
+
+    # Hash-based partitioning
+    hash = {
+      hash   = "host.name"
+      random = false
+    }
+
+    # SASL configuration
+    sasl = {
+      mechanism = "SCRAM-SHA-256"
+    }
+  }
+
+  # SSL configuration (reusing common SSL block)
+  ssl = {
+    certificate_authorities = [
+      file("${path.module}/ca.crt")
+    ]
+    certificate = file("${path.module}/client.crt")
+    key         = file("${path.module}/client.key")
+  }
+
+  # Additional YAML configuration for advanced settings
+  config_yaml = yamlencode({
+    "ssl.verification_mode"   = "full"
+    "ssl.supported_protocols" = ["TLSv1.2", "TLSv1.3"]
+    "max.message.bytes"       = 1000000
+  })
+}
+
+# Example showing round-robin partitioning with event grouping
+resource "elasticstack_fleet_output" "kafka_round_robin" {
+  name                 = "Kafka Round Robin Output"
+  output_id            = "kafka-round-robin-output"
+  type                 = "kafka"
+  default_integrations = false
+  default_monitoring   = false
+
+  hosts = ["kafka:9092"]
+
+  kafka = {
+    auth_type   = "none"
+    topic       = "elastic-metrics"
+    partition   = "round_robin"
+    compression = "lz4"
+
+    round_robin = [
+      {
+        group_events = 100
+      }
+    ]
+  }
+}
+```
+
 <!-- schema generated by tfplugindocs -->
 ## Schema
 
@@ -48,14 +211,83 @@ resource "elasticstack_fleet_output" "test_output" {
 - `default_integrations` (Boolean) Make this output the default for agent integrations.
 - `default_monitoring` (Boolean) Make this output the default for agent monitoring.
 - `hosts` (List of String) A list of hosts.
+- `kafka` (Attributes) Kafka-specific configuration. (see [below for nested schema](#nestedatt--kafka))
 - `output_id` (String) Unique identifier of the output.
-- `ssl` (Block List) SSL configuration. (see [below for nested schema](#nestedblock--ssl))
+- `ssl` (Attributes) SSL configuration. (see [below for nested schema](#nestedatt--ssl))
 
 ### Read-Only
 
 - `id` (String) The ID of this resource.
 
-<a id="nestedblock--ssl"></a>
+<a id="nestedatt--kafka"></a>
+### Nested Schema for `kafka`
+
+Optional:
+
+- `auth_type` (String) Authentication type for Kafka output.
+- `broker_timeout` (Number) Kafka broker timeout.
+- `client_id` (String) Kafka client ID.
+- `compression` (String) Compression type for Kafka output.
+- `compression_level` (Number) Compression level for Kafka output.
+- `connection_type` (String) Connection type for Kafka output.
+- `hash` (Attributes) Hash configuration for Kafka partition. (see [below for nested schema](#nestedatt--kafka--hash))
+- `headers` (Attributes List) Headers for Kafka messages. (see [below for nested schema](#nestedatt--kafka--headers))
+- `key` (String) Key field for Kafka messages.
+- `partition` (String) Partition strategy for Kafka output.
+- `password` (String, Sensitive) Password for Kafka authentication.
+- `random` (Attributes) Random configuration for Kafka partition. (see [below for nested schema](#nestedatt--kafka--random))
+- `required_acks` (Number) Number of acknowledgments required for Kafka output.
+- `round_robin` (Attributes) Round robin configuration for Kafka partition. (see [below for nested schema](#nestedatt--kafka--round_robin))
+- `sasl` (Attributes) SASL configuration for Kafka authentication. (see [below for nested schema](#nestedatt--kafka--sasl))
+- `timeout` (Number) Timeout for Kafka output.
+- `topic` (String) Kafka topic.
+- `username` (String) Username for Kafka authentication.
+- `version` (String) Kafka version.
+
+<a id="nestedatt--kafka--hash"></a>
+### Nested Schema for `kafka.hash`
+
+Optional:
+
+- `hash` (String) Hash field.
+- `random` (Boolean) Use random hash.
+
+
+<a id="nestedatt--kafka--headers"></a>
+### Nested Schema for `kafka.headers`
+
+Required:
+
+- `key` (String) Header key.
+- `value` (String) Header value.
+
+
+<a id="nestedatt--kafka--random"></a>
+### Nested Schema for `kafka.random`
+
+Optional:
+
+- `group_events` (Number) Number of events to group.
+
+
+<a id="nestedatt--kafka--round_robin"></a>
+### Nested Schema for `kafka.round_robin`
+
+Optional:
+
+- `group_events` (Number) Number of events to group.
+
+
+<a id="nestedatt--kafka--sasl"></a>
+### Nested Schema for `kafka.sasl`
+
+Optional:
+
+- `mechanism` (String) SASL mechanism.
+
+
+<a id="nestedatt--ssl"></a>
+### Nested Schema for `ssl`
+
+Required:
````
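
A quick offline way to exercise the new `kafka` attribute is to drop a minimal config into a scratch directory and validate it. A hedged sketch (the resource attributes come from the docs above, but the directory and attribute values are illustrative; `terraform init`/`validate` are commented out because they download the provider over the network):

```shell
#!/bin/sh
# Write a minimal Kafka fleet output config and (optionally) validate it.
DEMO_DIR="$(mktemp -d)"
cat > "$DEMO_DIR/main.tf" <<'EOF'
resource "elasticstack_fleet_output" "kafka_demo" {
  name      = "Demo Kafka Output"
  output_id = "kafka-demo-output"
  type      = "kafka"
  hosts     = ["kafka:9092"]

  kafka = {
    auth_type = "none"
    topic     = "demo-topic"
  }
}
EOF

# (cd "$DEMO_DIR" && terraform init && terraform validate)  # needs network access
echo "wrote $DEMO_DIR/main.tf"
```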
