A Terraform provider for managing resources through the AxonOps platform. This provider enables Infrastructure as Code (IaC) management of Kafka topics, ACLs, connectors, schemas, Cassandra backups, healthchecks, alerting, and more.
- Topics: Create, update, and delete Kafka topics with custom configurations
- ACLs: Manage Kafka Access Control Lists for fine-grained permissions
- Connectors: Deploy and manage Kafka Connect connectors
- Schemas: Register and version schemas in Schema Registry (AVRO, Protobuf, JSON)
```bash
git clone https://github.com/axonops/axonops-tf.git
cd axonops-tf
go build -o terraform-provider-axonops
```

For local development, add to `~/.terraformrc`:

```hcl
provider_installation {
  dev_overrides {
    "axonops/axonops" = "/path/to/axonops-tf"
  }
  direct {}
}
```

Add the provider to your Terraform configuration and run `terraform init`:
```hcl
terraform {
  required_providers {
    axonops = {
      source = "axonops/axonops"
    }
  }
}

provider "axonops" {
  api_key = "your-api-key" # Required for AxonOps SaaS
  org_id  = "your-org-id"  # Required
}
```

```bash
terraform init
```

The full set of provider options:

```hcl
provider "axonops" {
  api_key          = "your-api-key"        # Required for AxonOps SaaS
  axonops_host     = "axonops.example.com" # Default: dash.axonops.cloud/<org_id>
  axonops_protocol = "https"               # Default: https
  org_id           = "your-org-id"         # Required
  token_type       = "Bearer"              # Options: Bearer (default), AxonApi
}
```

| Attribute | Type | Required | Default | Description |
|---|---|---|---|---|
| `api_key` | string | No* | - | API key for authentication (*required for SaaS) |
| `axonops_host` | string | No | `dash.axonops.cloud/<org_id>` | AxonOps server hostname |
| `axonops_protocol` | string | No | `https` | Protocol (http/https) |
| `org_id` | string | Yes | - | Organization ID |
| `token_type` | string | No | `Bearer` | Authorization header type |
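Hardcoding credentials in configuration files is best avoided. A minimal sketch passing the API key through a sensitive Terraform variable instead (the variable name matches the one used by the import script later in this README):

```hcl
variable "axonops_api_key" {
  type      = string
  sensitive = true # keep the key out of plan output
}

provider "axonops" {
  api_key = var.axonops_api_key
  org_id  = "your-org-id"
}
```

The value can then be supplied from the environment with `export TF_VAR_axonops_api_key='your-api-key'` rather than committed to source control.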
Manages Kafka topics.
```hcl
resource "axonops_kafka_topic" "example" {
  name               = "my-topic"
  partitions         = 3
  replication_factor = 2
  cluster_name       = "my-kafka-cluster"

  config = {
    cleanup_policy      = "delete"
    retention_ms        = "604800000"
    delete_retention_ms = "86400000"
  }
}
```

| Attribute | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Topic name |
| `partitions` | int | Yes | Number of partitions (cannot be changed after creation) |
| `replication_factor` | int | Yes | Replication factor (cannot be changed after creation) |
| `cluster_name` | string | Yes | Kafka cluster name |
| `config` | map | No | Topic configurations (use underscores, converted to dots) |
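Because `config` keys use underscores in place of the dots in Kafka's property names, any standard topic property can be expressed this way. A sketch of a compacted topic (the topic name and property choices are illustrative, assuming the underscore-to-dot conversion applies uniformly):

```hcl
resource "axonops_kafka_topic" "compacted" {
  name               = "user-profiles"
  partitions         = 3
  replication_factor = 2
  cluster_name       = "my-kafka-cluster"

  config = {
    cleanup_policy            = "compact" # applied as cleanup.policy
    min_cleanable_dirty_ratio = "0.1"     # applied as min.cleanable.dirty.ratio
    max_message_bytes         = "1048576" # applied as max.message.bytes
  }
}
```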
Manages Kafka ACLs.
```hcl
resource "axonops_kafka_acl" "example" {
  cluster_name          = "my-kafka-cluster"
  resource_type         = "TOPIC"
  resource_name         = "my-topic"
  resource_pattern_type = "LITERAL"
  principal             = "User:alice"
  host                  = "*"
  operation             = "READ"
  permission_type       = "ALLOW"
}
```

| Attribute | Type | Required | Default | Description |
|---|---|---|---|---|
| `cluster_name` | string | Yes | - | Kafka cluster name |
| `resource_type` | string | Yes | - | ANY, TOPIC, GROUP, CLUSTER, TRANSACTIONAL_ID, DELEGATION_TOKEN, USER |
| `resource_name` | string | Yes | - | Name of the resource |
| `resource_pattern_type` | string | No | LITERAL | ANY, MATCH, LITERAL, PREFIXED |
| `principal` | string | Yes | - | Principal (e.g., `User:alice`) |
| `host` | string | No | `*` | Host pattern |
| `operation` | string | Yes | - | READ, WRITE, CREATE, DELETE, ALTER, DESCRIBE, etc. |
| `permission_type` | string | Yes | - | ANY, DENY, ALLOW |
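Pattern types other than LITERAL let one ACL cover many resources. A sketch granting write access to every topic sharing a prefix (the principal and prefix are illustrative):

```hcl
resource "axonops_kafka_acl" "orders_write" {
  cluster_name          = "my-kafka-cluster"
  resource_type         = "TOPIC"
  resource_name         = "orders-" # matches orders-eu, orders-us, etc.
  resource_pattern_type = "PREFIXED"
  principal             = "User:order-service"
  host                  = "*"
  operation             = "WRITE"
  permission_type       = "ALLOW"
}
```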
Manages Kafka Connect connectors.
```hcl
resource "axonops_kafka_connect_connector" "example" {
  cluster_name         = "my-kafka-cluster"
  connect_cluster_name = "my-connect-cluster"
  name                 = "my-connector"

  config = {
    "connector.class" = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    "tasks.max"       = "1"
    "file"            = "/tmp/input.txt"
    "topic"           = "my-topic"
  }
}
```

| Attribute | Type | Required | Description |
|---|---|---|---|
| `cluster_name` | string | Yes | Kafka cluster name |
| `connect_cluster_name` | string | Yes | Kafka Connect cluster name |
| `name` | string | Yes | Connector name |
| `config` | map | Yes | Connector configuration |
| `type` | string | Computed | Connector type (source/sink) |
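Since `type` is computed by the provider, it can be read back after apply. A minimal sketch exposing it as an output, against the example resource above:

```hcl
output "connector_type" {
  value = axonops_kafka_connect_connector.example.type # "source" or "sink"
}
```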
Manages Schema Registry schemas.
```hcl
resource "axonops_schema" "example" {
  cluster_name = "my-kafka-cluster"
  subject      = "my-topic-value"
  schema_type  = "AVRO"
  schema = jsonencode({
    type      = "record"
    name      = "MyRecord"
    namespace = "com.example"
    fields = [
      { name = "id", type = "int" },
      { name = "name", type = "string" }
    ]
  })
}
```

| Attribute | Type | Required | Description |
|---|---|---|---|
| `cluster_name` | string | Yes | Kafka cluster name |
| `subject` | string | Yes | Schema subject (e.g., topic-name-value) |
| `schema` | string | Yes | Schema definition |
| `schema_type` | string | Yes | AVRO, PROTOBUF, or JSON |
| `schema_id` | int | Computed | Schema ID from registry |
| `version` | int | Computed | Schema version number |
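The computed `schema_id` and `version` attributes can be referenced elsewhere in the configuration, for instance to surface what the registry assigned. A sketch against the example resource above:

```hcl
output "events_schema" {
  value = {
    id      = axonops_schema.example.schema_id
    version = axonops_schema.example.version
  }
}
```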
```hcl
terraform {
  required_providers {
    axonops = {
      source = "axonops/axonops"
    }
  }
}

provider "axonops" {
  api_key = var.axonops_api_key
  org_id  = "my-organization"
  # axonops_host defaults to dash.axonops.cloud/<org_id>
  # token_type defaults to Bearer
}

# Create a topic
resource "axonops_kafka_topic" "events" {
  name               = "user-events"
  partitions         = 6
  replication_factor = 3
  cluster_name       = "production-kafka"

  config = {
    retention_ms   = "604800000"
    cleanup_policy = "delete"
  }
}

# Create an ACL for the topic
resource "axonops_kafka_acl" "events_read" {
  cluster_name          = "production-kafka"
  resource_type         = "TOPIC"
  resource_name         = axonops_kafka_topic.events.name
  resource_pattern_type = "LITERAL"
  principal             = "User:consumer-app"
  operation             = "READ"
  permission_type       = "ALLOW"
}

# Register a schema for the topic
resource "axonops_schema" "events_value" {
  cluster_name = "production-kafka"
  subject      = "${axonops_kafka_topic.events.name}-value"
  schema_type  = "AVRO"
  schema = jsonencode({
    type      = "record"
    name      = "UserEvent"
    namespace = "com.example.events"
    fields = [
      { name = "user_id", type = "string" },
      { name = "event_type", type = "string" },
      { name = "timestamp", type = "long" }
    ]
  })
}
```

All resources support importing existing configurations into Terraform state.
| Resource | Import ID Format |
|---|---|
| `axonops_kafka_topic` | `cluster_name/topic_name` |
| `axonops_kafka_acl` | `cluster_name/resource_type/resource_name/resource_pattern_type/principal/host/operation/permission_type` |
| `axonops_kafka_connect_connector` | `cluster_name/connect_cluster_name/connector_name` |
| `axonops_schema` | `cluster_name/subject` |
| `axonops_logcollector` | `cluster_name/log_collector_name` |
| `axonops_healthcheck_tcp` | `cluster_name/healthcheck_name` |
| `axonops_healthcheck_http` | `cluster_name/healthcheck_name` |
| `axonops_healthcheck_shell` | `cluster_name/healthcheck_name` |
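Besides the `terraform import` CLI, Terraform 1.5+ supports declarative `import` blocks that take the same ID formats. A sketch for a topic (assuming the provider's standard import support; the resource values are illustrative):

```hcl
import {
  to = axonops_kafka_topic.my_topic
  id = "my-cluster/my-topic"
}

resource "axonops_kafka_topic" "my_topic" {
  name               = "my-topic"
  partitions         = 3
  replication_factor = 2
  cluster_name       = "my-cluster"
}
```

Running `terraform plan` then shows the pending import, and `terraform apply` records it in state.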
```bash
# Import a topic
terraform import axonops_kafka_topic.my_topic "my-cluster/my-topic"

# Import an ACL
terraform import axonops_kafka_acl.my_acl "my-cluster/TOPIC/my-topic/LITERAL/User:alice/*/READ/ALLOW"

# Import a connector
terraform import axonops_kafka_connect_connector.my_connector "my-cluster/my-connect-cluster/my-connector"

# Import a schema
terraform import axonops_schema.my_schema "my-cluster/my-topic-value"

# Import a log collector
terraform import axonops_logcollector.my_logs "my-cluster/My Log Collector"

# Import healthchecks
terraform import axonops_healthcheck_tcp.my_check "my-cluster/My TCP Check"
terraform import axonops_healthcheck_http.my_http "my-cluster/My HTTP Check"
terraform import axonops_healthcheck_shell.my_shell "my-cluster/My Shell Check"
```

For importing an entire cluster, use the provided import script:
```bash
# Usage
./scripts/import-cluster.sh <axonops_host> <org_id> <cluster_name> <api_key> [output_dir]

# Example
./scripts/import-cluster.sh axonops.example.com:8080 myorg mycluster abc123 ./imported

# The script will:
# 1. Generate .tf files for all resources (topics, ACLs, log collectors, healthchecks)
# 2. Create an import_commands.sh script with all terraform import commands
# 3. Generate a provider.tf with your configuration
```

After running the script:

- Review the generated `.tf` files in the output directory
- Set your API key: `export TF_VAR_axonops_api_key='your-api-key'`
- Initialize Terraform: `terraform init`
- Run the import commands: `bash import_commands.sh`
- Verify the state: `terraform plan` (should show no changes)
```bash
make build
```

```bash
# Configure main.tf with your settings
terraform init
terraform plan
terraform apply
```

Apache License 2.0
Contributions are welcome! Please open an issue or submit a pull request.