A Terraform provider for managing OpenShift Dedicated (OSD) clusters on Google Cloud Platform (GCP). The provider uses the OpenShift Cluster Manager (OCM) API to provision and manage OSD clusters with support for Workload Identity Federation (WIF), Private Service Connect (PSC), Shared VPC, and CMEK encryption.
Note: This provider is experimental software built and maintained by the Red Hat Managed OpenShift Black Belt team. It is not a supported Red Hat product and should not be considered production-ready. Use at your own risk; behavior and APIs may change without notice.
| Resource | Description |
|---|---|
| `osdgoogle_cluster` | Create and manage OSD clusters on GCP |
| `osdgoogle_cluster_admin` | HTPasswd identity provider with a cluster-admin user |
| `osdgoogle_wif_config` | Workload Identity Federation configuration for OSD on GCP |
| `osdgoogle_machine_pool` | Machine pools for worker nodes |
| `osdgoogle_cluster_waiter` | Wait for a cluster to reach a desired state |
| `osdgoogle_dns_domain` | DNS domain reservation |
| Data Source | Description |
|---|---|
| `osdgoogle_versions` | List available OpenShift versions |
| `osdgoogle_machine_types` | List GCP machine types by region |
| `osdgoogle_regions` | List available GCP regions |
| `osdgoogle_wif_config` | Look up a WIF config by display name or ID |
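As a sketch of how a data source can feed a resource, the versions data source could pin a cluster to an available release. The attribute names below (such as `versions[0]`) are illustrative assumptions, not the verified schema; check the generated docs for the real one:

```hcl
# Illustrative sketch only: output attribute names are assumed, not verified.
data "osdgoogle_versions" "available" {}

resource "osdgoogle_cluster" "pinned" {
  name         = "my-osd-cluster"
  cloud_region = "us-central1"

  # Pin to the first version the data source returns (assumed attribute).
  version = data.osdgoogle_versions.available.versions[0]
}
```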
- Workload Identity Federation (WIF) – Use WIF instead of service account keys
- Private Service Connect (PSC) – Private connectivity to Red Hat services
- Shared VPC – Deploy into an existing shared VPC
- CMEK – Customer-managed encryption keys
- Shielded VM (Secure Boot) – Per-cluster or per-machine-pool
- Autoscaling – Min/max replicas for worker nodes
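As a sketch, autoscaling and Secure Boot might be wired into a machine pool like this. The attribute names here are assumptions for illustration; the real schema lives in the generated docs:

```hcl
# Illustrative sketch: attribute names below are assumed, not verified.
resource "osdgoogle_machine_pool" "autoscaled" {
  cluster_id   = osdgoogle_cluster.example.id # assumed reference
  name         = "autoscaled-workers"
  machine_type = "custom-4-16384"

  # Autoscaling: min/max replicas instead of a fixed replica count.
  autoscaling {
    min_replicas = 2
    max_replicas = 6
  }

  # Shielded VM Secure Boot, set per machine pool.
  secure_boot = true
}
```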
- Terraform >= 1.0
- Go 1.24+ (for building from source)
- OCM token from console.redhat.com
- GCP project with billing enabled and OSD entitlements
```hcl
terraform {
  required_providers {
    osdgoogle = {
      source  = "registry.terraform.io/rh-mobb/osd-google"
      version = ">= 0.0.1"
    }
  }
}
```

See the Development Workflow section below for how to build and test the provider locally using `dev_overrides`.
The provider requires credentials to access the OCM API. You can use either a token (recommended for interactive use) or client credentials (for CI/CD or automation).
1. Log in to the Red Hat Hybrid Cloud Console
2. Go to OpenShift → Token (direct link)
3. Click Load token or Copy to copy your offline token
4. Use it in the provider:

   ```shell
   export OSDGOOGLE_TOKEN="your-token-here"
   ```

   Or in Terraform:

   ```hcl
   provider "osdgoogle" {
     token = var.ocm_token
   }
   ```
Offline tokens are long-lived and can be refreshed automatically. Access tokens are short-lived (about 1 hour).
Client credentials (client_id + client_secret) are used for non-interactive or programmatic access (e.g. CI/CD pipelines). They use the OAuth2 client credentials grant.
To obtain client credentials for OCM:
- Contact your Red Hat account team or open a support case to request OAuth2 API credentials for programmatic OCM access
- Alternatively, if your organization has set up a service account or OAuth2 client in the Red Hat SSO realm (`redhat-external`), use those credentials
Once you have them:
```shell
export OSDGOOGLE_CLIENT_ID="your-client-id"
export OSDGOOGLE_CLIENT_SECRET="your-client-secret"
```

Or in Terraform (use variables for sensitive values):

```hcl
provider "osdgoogle" {
  client_id     = var.ocm_client_id
  client_secret = var.ocm_client_secret # mark as sensitive
}
```

For more details, see the Red Hat OCM CLI documentation or run `ocm login --help`.
You can authenticate using either a token or client credentials (same options as the OCM CLI):
```hcl
provider "osdgoogle" {
  token = var.ocm_token # or use OSDGOOGLE_TOKEN env var
}
```

```hcl
provider "osdgoogle" {
  client_id     = var.client_id     # or use OSDGOOGLE_CLIENT_ID env var
  client_secret = var.client_secret # or use OSDGOOGLE_CLIENT_SECRET env var
}
```

```hcl
provider "osdgoogle" {
  token       = var.ocm_token # OR client_id + client_secret
  url         = "https://api.openshift.com"
  token_url   = "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token"
  trusted_cas = file("path/to/ca.pem")
  insecure    = false
}
```

| Argument | Description |
|---|---|
| `token` | OCM offline or access token (sensitive). Use with the `OSDGOOGLE_TOKEN` env var. |
| `client_id` | OAuth client identifier for the client credentials flow. Use with the `OSDGOOGLE_CLIENT_ID` env var. |
| `client_secret` | OAuth client secret (sensitive). Use with the `OSDGOOGLE_CLIENT_SECRET` env var. |
| `url` | OCM API URL (default: `https://api.openshift.com`) |
| `token_url` | OpenID token endpoint |
| `trusted_cas` | PEM CA bundle for TLS |
| `insecure` | Skip TLS verification (not for production) |
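Since each argument has an environment-variable counterpart, a minimal configuration can likely leave the provider block empty when credentials are exported in the shell. This is a sketch under that assumption, not a verified pattern:

```hcl
# Assumes all settings come from OSDGOOGLE_* environment variables
# (e.g. OSDGOOGLE_TOKEN exported before running terraform).
provider "osdgoogle" {}
```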
```hcl
provider "osdgoogle" {
  token = var.ocm_token
}

resource "osdgoogle_cluster" "example" {
  name                 = "my-osd-cluster"
  cloud_region         = "us-central1"
  gcp_project_id       = var.gcp_project_id
  version              = "4.16.1"
  compute_nodes        = 3
  compute_machine_type = "custom-4-16384"
}

output "api_url" {
  value = osdgoogle_cluster.example.api_url
}

output "console_url" {
  value = osdgoogle_cluster.example.console_url
}
```

CCS clusters (your own GCP project) require `wif_config_id` or `gcp_authentication`. Create the WIF config in OCM using `terraform/wif_config/` before provisioning a cluster; see the cluster example for details.
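One way to wire this up, sketched with assumed attribute names, is to look up the WIF config created by `terraform/wif_config/` via the data source and pass its ID to the cluster:

```hcl
# Illustrative sketch: attribute names are assumed, not verified.
# Looks up the WIF config created earlier by terraform/wif_config/.
data "osdgoogle_wif_config" "this" {
  display_name = "my-wif-config" # assumed lookup argument
}

resource "osdgoogle_cluster" "ccs" {
  name           = "my-osd-cluster"
  cloud_region   = "us-central1"
  gcp_project_id = var.gcp_project_id
  wif_config_id  = data.osdgoogle_wif_config.this.id
}
```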
The examples use the Terraform Registry source by default.
For local development, use dev_overrides so Terraform uses your local build without changing any example files — see Development Workflow.
| Example | Description |
|---|---|
| cluster | Basic cluster with OCM-managed VPC |
| cluster_with_vpc | Cluster with module-managed VPC (BYOVPC) |
| cluster_psc | Cluster with Private Service Connect and Secure Boot |
| cluster_shared_vpc | Cluster using a Shared VPC |
| cluster_multi_az | Multi-AZ cluster with bare metal machine pool |
Every make example.<name> target handles the full lifecycle — WIF config (terraform/wif_config/) is applied first, then the cluster. On destroy, the cluster is destroyed first, then the WIF config:
```shell
make build
export OSDGOOGLE_TOKEN="your-token"
gcloud auth application-default login

make example.cluster         # Create WIF + cluster
make example.cluster.destroy # Destroy cluster + WIF
```

To test with your local provider build (build, install to `~/.terraform.d/plugins`, re-init, then run), use the `dev.*` targets:

```shell
make dev.cluster.apply   # Apply with local provider build
make dev.cluster.plan    # Plan with local provider build
make dev.cluster.destroy # Destroy with local provider build
```

Set `gcp_project_id` and `cluster_name` via `TF_VAR_gcp_project_id` / `TF_VAR_cluster_name`, or uncomment them in each example's `terraform.tfvars` and in `terraform/wif_config/terraform.tfvars` (use the same `cluster_name` in both). The Makefile preflight accepts a project from `gcloud config` or `TF_VAR_gcp_project_id`. See `examples/cluster/README.md` for details.
When developing with an AI coding assistant (Cursor, Claude, Copilot, etc.), clone the upstream reference repositories into references/ before starting. These repos are gitignored and provide agents with offline context for the OCM API, the Go SDK, and the canonical RHCS provider structure — reducing hallucinations and improving code quality significantly.
```shell
make references
```

Run the same command at any time to pull the latest changes from each repo's default branch. See AGENTS.md for a description of each reference and what it is useful for.
The recommended way to develop and test the provider locally is with Terraform's dev_overrides.
This lets you build the provider once and use it in any example directory without changing the required_providers source or running terraform init.
1. Build the provider and get the `~/.terraformrc` snippet:

   ```shell
   make dev-setup
   ```

2. Add the printed block to `~/.terraformrc` (create the file if it doesn't exist):

   ```hcl
   provider_installation {
     dev_overrides {
       "registry.terraform.io/rh-mobb/osd-google" = "/path/to/terraform-provider-osd-google"
     }
     direct {}
   }
   ```

   Replace the path with the actual repo directory printed by `make dev-setup`.
After the one-time setup, the dev cycle is:
```shell
make build
make example.cluster # or: cd examples/cluster && terraform plan
```

No `terraform init` is needed when using `dev_overrides`; Terraform finds the provider binary directly.
If you prefer not to use dev_overrides, the dev.* targets build and install the provider to ~/.terraform.d/plugins, clear lock files, re-init, and then run:
```shell
export OSDGOOGLE_TOKEN="your-token"
gcloud auth application-default login

make dev.cluster.apply   # Install provider + WIF + cluster
make dev.cluster.plan    # Plan only
make dev.cluster.destroy # Destroy cluster + WIF
```

Use `dev.<example>` for any example: `dev.cluster`, `dev.cluster_baremetal`, `dev.cluster_with_vpc`, `dev.cluster_psc`, `dev.cluster_shared_vpc`, `dev.cluster_multi_az`. Set `gcp_project_id` and `cluster_name` via `TF_VAR_*` or `terraform.tfvars`. Each run uses the freshly built provider.
Note: Terraform prints a warning about `dev_overrides` being active. This is expected and safe to ignore during development.
When you're done developing, remove or comment out the dev_overrides block in ~/.terraformrc to go back to using the registry provider.
To step through provider code with a debugger:
```shell
go build -gcflags="all=-N -l" -o terraform-provider-osd-google .
dlv exec ./terraform-provider-osd-google -- -debug
```

Delve prints a `TF_REATTACH_PROVIDERS` value. Export it in another terminal:
```shell
export TF_REATTACH_PROVIDERS='<value printed by delve>'
cd examples/cluster
terraform apply # attaches to the running provider process
```

Set `TF_LOG` to see provider-level debug output:

```shell
TF_LOG=DEBUG terraform apply
TF_LOG_PROVIDER=TRACE terraform plan # provider logs only (no Terraform core noise)
```

For local development before the provider is published, use the local plugins directory:
1. Install the provider:

   ```shell
   make install
   ```

2. Terraform automatically checks `~/.terraform.d/plugins` for local providers. Ensure no `provider_installation` block in `~/.terraformrc` overrides this. If you had `dev_overrides` for the registry source, remove it.

3. Run `terraform init` in the example directory.
```shell
# Unit tests (no infrastructure required)
make unit-test

# Subsystem tests (uses OCM mock server; requires make install)
make subsystem-test

# Acceptance tests (real OCM + GCP; requires OCM_TOKEN and GCP_PROJECT_ID)
make acceptance-test
```

```shell
make tools # install tfplugindocs
make docs  # generate docs in docs/ from templates and schema
```

Run `make docs` before every PR when you change provider schema, resources, data sources, or templates. CI fails if `docs/` is out of date.
```shell
make fmt
```

Provider documentation is generated in the `docs/` directory:
```
.
├── main.go                 # Provider entry point
├── provider/
│   ├── provider.go         # Provider schema and configuration
│   ├── cluster/            # osdgoogle_cluster
│   ├── wif_config/         # osdgoogle_wif_config
│   ├── machine_pool/       # osdgoogle_machine_pool
│   ├── cluster_waiter/     # osdgoogle_cluster_waiter
│   ├── dns_domain/         # osdgoogle_dns_domain
│   ├── datasources/        # versions, machine_types, regions
│   └── common/             # Shared helpers
├── terraform/              # Shared Terraform configs (applied before examples)
│   └── wif_config/         # WIF config in OCM (its README explains the separate apply)
├── subsystem/              # OCM mock integration tests
├── acceptance/             # Real API acceptance tests
├── examples/               # Example configurations
└── docs/                   # Generated documentation
```
See CONTRIBUTING.md for development setup, code style, and how to submit changes. By participating, you agree to our Code of Conduct.
Copyright (c) 2025 Red Hat, Inc.
Licensed under the Apache License, Version 2.0. See LICENSE for the full text.