AxonOps Terraform Provider

A Terraform provider for managing resources through the AxonOps platform. This provider enables Infrastructure as Code (IaC) management of Kafka topics, ACLs, connectors, schemas, Cassandra backups, healthchecks, alerting, and more.

Features

  • Topics: Create, update, and delete Kafka topics with custom configurations
  • ACLs: Manage Kafka Access Control Lists for fine-grained permissions
  • Connectors: Deploy and manage Kafka Connect connectors
  • Schemas: Register and version schemas in Schema Registry (AVRO, Protobuf, JSON)

Requirements

  • Terraform >= 1.0
  • Go >= 1.23 (for building from source)
  • Access to an AxonOps instance

Installation

Building from Source

git clone https://github.com/axonops/axonops-tf.git
cd axonops-tf
go build -o terraform-provider-axonops

Development Override

For local development, add to ~/.terraformrc:

provider_installation {
  dev_overrides {
    "hashicorp/axonops" = "/path/to/axonops-tf"
  }
  direct {}
}

Install to Local Plugin Directory

To install the provider to your local Terraform plugin cache, you can either download a pre-built release or build from source.

Option 1: Download from GitHub Releases

# Set variables for your platform
OS="linux"       # or "darwin" for macOS, "windows" for Windows
ARCH="amd64"     # or "arm64" for ARM-based systems
VERSION="1.0.0"

# Download the release
curl -LO "https://github.com/axonops/axonops-tf/releases/download/v${VERSION}/terraform-provider-axonops_v${VERSION}_${OS}_${ARCH}.zip"

# Create the plugin directory
mkdir -p ~/.terraform.d/plugins/registry.terraform.io/hashicorp/axonops/${VERSION}/${OS}_${ARCH}/

# Unzip to the plugin directory
unzip "terraform-provider-axonops_v${VERSION}_${OS}_${ARCH}.zip" -d ~/.terraform.d/plugins/registry.terraform.io/hashicorp/axonops/${VERSION}/${OS}_${ARCH}/
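If you prefer not to hard-code the platform, OS and ARCH can usually be derived from uname (a sketch; it assumes the release artifacts follow the `${OS}_${ARCH}` naming convention shown above):

```shell
# Detect the OS ("linux" or "darwin") and normalize the CPU architecture
# to the names used in the release artifacts.
OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)        ARCH="amd64" ;;
  aarch64|arm64) ARCH="arm64" ;;
esac
echo "${OS}_${ARCH}"   # e.g. linux_amd64
```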

Option 2: Build from Source

# Set variables for your platform
OS="linux"       # or "darwin" for macOS, "windows" for Windows
ARCH="amd64"     # or "arm64" for ARM-based systems
VERSION="1.0.0"

# Clone and build
git clone https://github.com/axonops/axonops-tf.git
cd axonops-tf
go build -o terraform-provider-axonops

# Create the plugin directory
mkdir -p ~/.terraform.d/plugins/registry.terraform.io/hashicorp/axonops/${VERSION}/${OS}_${ARCH}/

# Copy the binary
cp terraform-provider-axonops ~/.terraform.d/plugins/registry.terraform.io/hashicorp/axonops/${VERSION}/${OS}_${ARCH}/

Configure Terraform

Reference the provider in your Terraform configuration:

terraform {
  required_providers {
    axonops = {
      source  = "hashicorp/axonops"
      version = "1.0.0"
    }
  }
}

Run terraform init to initialize the provider.

Provider Configuration

provider "axonops" {
  api_key          = "your-api-key"        # Required for AxonOps SaaS
  axonops_host     = "axonops.example.com" # Default: dash.axonops.cloud/<org_id>
  axonops_protocol = "https"               # Default: https
  org_id           = "your-org-id"         # Required
  token_type       = "Bearer"              # Options: Bearer (default), AxonApi
}

| Attribute | Type | Required | Default | Description |
|---|---|---|---|---|
| api_key | string | No* | - | API key for authentication (*required for SaaS) |
| axonops_host | string | No | dash.axonops.cloud/&lt;org_id&gt; | AxonOps server hostname |
| axonops_protocol | string | No | https | Protocol (http/https) |
| org_id | string | Yes | - | Organization ID |
| token_type | string | No | Bearer | Authorization header type |
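To keep the API key out of version control, the provider block can read it from a Terraform variable instead of a literal (a sketch; the variable name is arbitrary, and its value can be supplied via the TF_VAR_axonops_api_key environment variable, as the import section below also assumes):

```hcl
variable "axonops_api_key" {
  type      = string
  sensitive = true
}

provider "axonops" {
  api_key = var.axonops_api_key
  org_id  = "your-org-id"
}
```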

Resources

axonops_kafka_topic

Manages Kafka topics.

resource "axonops_kafka_topic" "example" {
  name               = "my-topic"
  partitions         = 3
  replication_factor = 2
  cluster_name       = "my-kafka-cluster"
  config = {
    cleanup_policy      = "delete"
    retention_ms        = "604800000"
    delete_retention_ms = "86400000"
  }
}

| Attribute | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Topic name |
| partitions | int | Yes | Number of partitions (cannot be changed after creation) |
| replication_factor | int | Yes | Replication factor (cannot be changed after creation) |
| cluster_name | string | Yes | Kafka cluster name |
| config | map | No | Topic configurations (use underscores, converted to dots) |
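The underscore-to-dot conversion means the config keys above map onto the standard Kafka topic config names; the mapping can be illustrated with a one-to-one character substitution (a sketch, assuming a plain underscore-for-dot swap is all the provider does):

```shell
# Map a provider-style config key to its Kafka equivalent by
# replacing every underscore with a dot.
to_kafka_key() { printf '%s\n' "$1" | tr '_' '.'; }

to_kafka_key "cleanup_policy"       # -> cleanup.policy
to_kafka_key "retention_ms"         # -> retention.ms
to_kafka_key "delete_retention_ms"  # -> delete.retention.ms
```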

axonops_kafka_acl

Manages Kafka ACLs.

resource "axonops_kafka_acl" "example" {
  cluster_name          = "my-kafka-cluster"
  resource_type         = "TOPIC"
  resource_name         = "my-topic"
  resource_pattern_type = "LITERAL"
  principal             = "User:alice"
  host                  = "*"
  operation             = "READ"
  permission_type       = "ALLOW"
}

| Attribute | Type | Required | Default | Description |
|---|---|---|---|---|
| cluster_name | string | Yes | - | Kafka cluster name |
| resource_type | string | Yes | - | ANY, TOPIC, GROUP, CLUSTER, TRANSACTIONAL_ID, DELEGATION_TOKEN, USER |
| resource_name | string | Yes | - | Name of the resource |
| resource_pattern_type | string | No | LITERAL | ANY, MATCH, LITERAL, PREFIXED |
| principal | string | Yes | - | Principal (e.g., User:alice) |
| host | string | No | * | Host pattern |
| operation | string | Yes | - | READ, WRITE, CREATE, DELETE, ALTER, DESCRIBE, etc. |
| permission_type | string | Yes | - | ANY, DENY, ALLOW |
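A PREFIXED pattern can cover a whole family of resources in a single rule; for example, granting a principal read access to every consumer group sharing a prefix (a hypothetical sketch using the attributes above; the cluster, prefix, and principal names are placeholders):

```hcl
resource "axonops_kafka_acl" "analytics_groups" {
  cluster_name          = "my-kafka-cluster"
  resource_type         = "GROUP"
  resource_name         = "analytics-"
  resource_pattern_type = "PREFIXED"
  principal             = "User:analytics-app"
  operation             = "READ"
  permission_type       = "ALLOW"
}
```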

axonops_kafka_connect_connector

Manages Kafka Connect connectors.

resource "axonops_kafka_connect_connector" "example" {
  cluster_name         = "my-kafka-cluster"
  connect_cluster_name = "my-connect-cluster"
  name                 = "my-connector"
  config = {
    "connector.class" = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    "tasks.max"       = "1"
    "file"            = "/tmp/input.txt"
    "topic"           = "my-topic"
  }
}

| Attribute | Type | Required | Description |
|---|---|---|---|
| cluster_name | string | Yes | Kafka cluster name |
| connect_cluster_name | string | Yes | Kafka Connect cluster name |
| name | string | Yes | Connector name |
| config | map | Yes | Connector configuration |
| type | string | Computed | Connector type (source/sink) |

axonops_schema

Manages Schema Registry schemas.

resource "axonops_schema" "example" {
  cluster_name = "my-kafka-cluster"
  subject      = "my-topic-value"
  schema_type  = "AVRO"
  schema       = jsonencode({
    type      = "record"
    name      = "MyRecord"
    namespace = "com.example"
    fields    = [
      { name = "id", type = "int" },
      { name = "name", type = "string" }
    ]
  })
}

| Attribute | Type | Required | Description |
|---|---|---|---|
| cluster_name | string | Yes | Kafka cluster name |
| subject | string | Yes | Schema subject (e.g., topic-name-value) |
| schema | string | Yes | Schema definition |
| schema_type | string | Yes | AVRO, PROTOBUF, or JSON |
| schema_id | int | Computed | Schema ID from registry |
| version | int | Computed | Schema version number |

Example Usage

terraform {
  required_providers {
    axonops = {
      source = "hashicorp/axonops"
    }
  }
}

provider "axonops" {
  api_key  = var.axonops_api_key
  org_id   = "my-organization"
  # axonops_host defaults to dash.axonops.cloud/<org_id>
  # token_type defaults to Bearer
}

# Create a topic
resource "axonops_kafka_topic" "events" {
  name               = "user-events"
  partitions         = 6
  replication_factor = 3
  cluster_name       = "production-kafka"
  config = {
    retention_ms   = "604800000"
    cleanup_policy = "delete"
  }
}

# Create an ACL for the topic
resource "axonops_kafka_acl" "events_read" {
  cluster_name          = "production-kafka"
  resource_type         = "TOPIC"
  resource_name         = axonops_kafka_topic.events.name
  resource_pattern_type = "LITERAL"
  principal             = "User:consumer-app"
  operation             = "READ"
  permission_type       = "ALLOW"
}

# Register a schema for the topic
resource "axonops_schema" "events_value" {
  cluster_name = "production-kafka"
  subject      = "${axonops_kafka_topic.events.name}-value"
  schema_type  = "AVRO"
  schema       = jsonencode({
    type      = "record"
    name      = "UserEvent"
    namespace = "com.example.events"
    fields    = [
      { name = "user_id", type = "string" },
      { name = "event_type", type = "string" },
      { name = "timestamp", type = "long" }
    ]
  })
}
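The computed attributes of the registered schema can be exposed as Terraform outputs (a sketch; schema_id and version are the computed attributes listed in the axonops_schema table above):

```hcl
output "events_schema_id" {
  value = axonops_schema.events_value.schema_id
}

output "events_schema_version" {
  value = axonops_schema.events_value.version
}
```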

Importing Existing Resources

All resources support importing existing configurations into Terraform state.

Import ID Formats

| Resource | Import ID Format |
|---|---|
| axonops_kafka_topic | cluster_name/topic_name |
| axonops_kafka_acl | cluster_name/resource_type/resource_name/resource_pattern_type/principal/host/operation/permission_type |
| axonops_kafka_connect_connector | cluster_name/connect_cluster_name/connector_name |
| axonops_schema | cluster_name/subject |
| axonops_logcollector | cluster_name/log_collector_name |
| axonops_healthcheck_tcp | cluster_name/healthcheck_name |
| axonops_healthcheck_http | cluster_name/healthcheck_name |
| axonops_healthcheck_shell | cluster_name/healthcheck_name |

Import Examples

# Import a topic
terraform import axonops_kafka_topic.my_topic "my-cluster/my-topic"

# Import an ACL
terraform import axonops_kafka_acl.my_acl "my-cluster/TOPIC/my-topic/LITERAL/User:alice/*/READ/ALLOW"

# Import a connector
terraform import axonops_kafka_connect_connector.my_connector "my-cluster/my-connect-cluster/my-connector"

# Import a schema
terraform import axonops_schema.my_schema "my-cluster/my-topic-value"

# Import a log collector
terraform import axonops_logcollector.my_logs "my-cluster/My Log Collector"

# Import healthchecks
terraform import axonops_healthcheck_tcp.my_check "my-cluster/My TCP Check"
terraform import axonops_healthcheck_http.my_http "my-cluster/My HTTP Check"
terraform import axonops_healthcheck_shell.my_shell "my-cluster/My Shell Check"
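When several topics in one cluster need importing, the per-topic commands can be generated with a small loop (a sketch; it only prints the commands so they can be reviewed before running, and the cluster and topic names are placeholders):

```shell
CLUSTER="my-cluster"
TOPICS="user-events order-events audit-log"

for topic in $TOPICS; do
  # Terraform resource addresses cannot contain "-", so swap it for "_"
  addr="axonops_kafka_topic.$(printf '%s' "$topic" | tr '-' '_')"
  echo "terraform import $addr \"${CLUSTER}/${topic}\""
done
```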

Bulk Import Script

For importing an entire cluster, use the provided import script:

# Usage
./scripts/import-cluster.sh <axonops_host> <org_id> <cluster_name> <api_key> [output_dir]

# Example
./scripts/import-cluster.sh axonops.example.com:8080 myorg mycluster abc123 ./imported

# The script will:
# 1. Generate .tf files for all resources (topics, ACLs, log collectors, healthchecks)
# 2. Create an import_commands.sh script with all terraform import commands
# 3. Generate a provider.tf with your configuration

After running the script:

  1. Review the generated .tf files in the output directory
  2. Set your API key: export TF_VAR_axonops_api_key='your-api-key'
  3. Initialize Terraform: terraform init
  4. Run the import commands: bash import_commands.sh
  5. Verify the state: terraform plan (should show no changes)

Development

Building

make build

Testing

# Configure main.tf with your settings
terraform init
terraform plan
terraform apply

License

Apache License 2.0

Contributing

Contributions are welcome! Please open an issue or submit a pull request.