A namespace-scoped CR that defines the resources and input parameters required for cluster provisioning. It is delivered via GitOps and reconciled by the O‑Cloud Manager.
The CRD's spec fields include:
- name: Specifies the base name of the ClusterTemplate.
- version: Defines the version of the ClusterTemplate.
- release: The OCP release version that will be installed with this template. The Git layout uses a matching directory name version_4.Y.Z/; keep the release here and the directory version aligned.
- description: A description of the ClusterTemplate.
- templates: Contains multiple sub-templates.
- hwTemplate: (Optional) References the HardwareTemplate resource containing the hardware template used for node allocation. See details below.
- clusterInstanceDefaults: References the ConfigMap containing default values for ClusterInstance.
- policyTemplateDefaults: References the ConfigMap containing default values for ACM policy templates.
- templateParameterSchema: Specifies the OpenAPI v3 schema that defines which parameters the SMO must provide and which values are accepted in the ProvisioningRequest.
- templateId: A unique identifier (UUID) for the ClusterTemplate, automatically generated by the O-Cloud Manager if not provided.
- characteristics (FFS): A list of key-value pairs describing characteristics associated with the template.
The ClusterTemplate references defaults stored in Git (for example, installation and policy defaults) and, through its templateParameterSchema, specifies the inputs that the SMO can supply in the ProvisioningRequest.
At provisioning time, the O‑Cloud Manager validates the SMO‑supplied values against the schema and merges them with the template’s defaults to render the concrete artifacts used to install and configure the cluster.
A complete example of ClusterTemplate is available under the GitOps sample content: sno-ran-du-v4-Y-Z-1.yaml.
The metadata.name of the ClusterTemplate CR must follow the format <name.version> and must be unique across all namespaces. The combination of name and version uniquely identifies a template.
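Putting these fields together, a minimal ClusterTemplate might look like the following sketch. The names, namespace, and ConfigMap references are illustrative placeholders, and the API version is an assumption; refer to the GitOps sample content for an authoritative example.

```yaml
apiVersion: clcm.openshift.io/v1alpha1    # API group per the CRD; version is an assumption
kind: ClusterTemplate
metadata:
  name: sno-ran-du.v4-Y-Z-1               # must follow the <name>.<version> format
  namespace: sno-ran-du                   # illustrative
spec:
  name: sno-ran-du
  version: v4-Y-Z-1
  release: 4.Y.Z                          # keep aligned with the version_4.Y.Z/ directory in Git
  templates:
    hwTemplate: sno-ran-du-hwtemplate     # optional; omit to skip hardware provisioning
    clusterInstanceDefaults: clusterinstance-defaults-v1
    policyTemplateDefaults: policytemplates-defaults-v1
  templateParameterSchema:
    type: object                          # OpenAPI v3 schema; see the mandatory parameters below
```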
The schema defined in the templateParameterSchema must include the following mandatory parameters for the cluster provisioning process:
- nodeClusterName: Specifies the name of the node cluster.
- oCloudSiteId: Specifies the oCloud site identifier.
- policyTemplateParameters: A subschema that defines the parameters for cluster configuration.
- clusterInstanceParameters: A subschema for ClusterInstance, defining the parameters that are allowed in the ProvisioningRequest for cluster installation.
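A templateParameterSchema carrying these four mandatory parameters can be sketched as follows; the subschema bodies are placeholders, and real templates constrain them further.

```yaml
templateParameterSchema:
  type: object
  required:
    - nodeClusterName
    - oCloudSiteId
    - policyTemplateParameters
    - clusterInstanceParameters
  properties:
    nodeClusterName:
      type: string
    oCloudSiteId:
      type: string
    policyTemplateParameters:
      type: object
      properties: {}    # cluster-configuration parameters go here
    clusterInstanceParameters:
      type: object
      properties: {}    # installation parameters allowed in the ProvisioningRequest
```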
The template references hardware artifacts that the O‑Cloud Metal3 hardware plugin uses to allocate and prepare bare‑metal nodes.
Note
spec.templates.hwTemplate is optional. When hwTemplate is not provided, hardware provisioning is not performed, and the hardware-related parameters for each node (e.g., bmcAddress, bmcCredentialsDetails, bootMACAddress, nodeNetwork.interfaces[*].macAddress) must be specified in the ProvisioningRequest. See this example of a ClusterTemplate without hwTemplate.
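For illustration, when hwTemplate is omitted, the clusterInstanceParameters supplied in the ProvisioningRequest might carry per-node hardware details along these lines. The values and exact nesting shown are assumptions; the referenced example is the authoritative shape.

```yaml
spec:
  templateParameters:
    clusterInstanceParameters:
      nodes:
        - hostName: node1.example.com    # illustrative
          bmcAddress: redfish-virtualmedia+https://192.0.2.10/redfish/v1/Systems/1   # illustrative
          bmcCredentialsDetails:         # credentials are typically secret-managed
            bmcCredentials:
              username: admin            # illustrative
              password: changeme         # illustrative
          bootMACAddress: "AA:BB:CC:DD:EE:01"
          nodeNetwork:
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:01"
```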
HardwareProfile describes the desired hardware state for a class of servers, such as BIOS settings and target firmware levels. It is applied by the Metal3 hardware plugin during Day‑0 (initial provision) and can also be used for Day‑2 updates. Example profiles are provided under hardwareprofiles.
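As a rough illustration only, a HardwareProfile pairs BIOS attributes with target firmware levels. The field names below are assumptions, not the real schema; consult the sample profiles under hardwareprofiles for the authoritative structure.

```yaml
apiVersion: clcm.openshift.io/v1alpha1    # API group/version assumed
kind: HardwareProfile
metadata:
  name: profile-dell-r740                 # illustrative
spec:
  bios:
    attributes:                           # field names assumed
      BootMode: Uefi
      SriovGlobalEnable: Enabled
  biosFirmware:
    version: "2.19.1"                     # illustrative target firmware level
```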
HardwareTemplate defines the node groups that the cluster needs and how matching hosts are selected from the inventory. For each group you specify attributes such as the role and group name, and you provide a resourceSelector with label criteria.
You can specify either fully-qualified labels such as resourceselector.clcm.openshift.io/server-type=Dell-R740 or short keys like server-type=Dell-R740; the controller will automatically prepend the resourceselector.clcm.openshift.io/ prefix when it is omitted.
At runtime, the Metal3 hardware plugin finds BareMetalHosts whose labels satisfy the selector and allocates them to the request.
[TODO] Add documentation for the hardware data selector
The template also carries a bootInterfaceLabel that identifies which NIC is used for booting. This label is later matched against the interface list to resolve the boot MAC address. For more details, refer to ClusterInstance defaults ConfigMap.
The template includes a hardwarePluginRef, which points to metal3-hwplugin (automatically created by the operator), and a reference to the HardwareProfile that should be applied to the matched hosts. The ClusterTemplate references the HardwareTemplate at spec.templates.hwTemplate.
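Combining the elements above, a HardwareTemplate can be sketched as follows. The node-group field names are assumptions derived from this description; the authoritative examples live under hardwaretemplates.

```yaml
apiVersion: clcm.openshift.io/v1alpha1      # API group/version assumed
kind: HardwareTemplate
metadata:
  name: sno-ran-du-hwtemplate               # illustrative
spec:
  hardwarePluginRef: metal3-hwplugin        # created automatically by the operator
  bootInterfaceLabel: bootable-interface    # matched against ClusterInstance interface labels
  nodeGroupData:                            # field name assumed
    - name: controller                      # group name; illustrative
      role: master
      hwProfile: profile-dell-r740          # HardwareProfile applied to matched hosts
      resourceSelector:
        server-type: Dell-R740              # short key; the prefix is prepended automatically
```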
Example files are under hardwaretemplates. For details about server onboarding, refer to Server Onboarding.
Cluster installation defaults that are common across clusters are provided in a ConfigMap and referenced by the ClusterTemplate. These defaults can include node configuration, networking, and other baseline settings. All fields must comply with the SiteConfig ClusterInstance schema.
Each node’s boot interface in the interfaces list must include a label that matches the spec.bootInterfaceLabel in the associated HardwareTemplate. This allows the O-Cloud manager to resolve the boot NIC’s MAC address from the NIC-to-MAC mappings retrieved from the Metal3 hardware plugin.
For example:

```yaml
nodes:
  - role: master
    nodeNetwork:
      interfaces:
        - name: eno1
          label: bootable-interface
        - name: eno2
```

A complete example of ClusterInstance defaults can be found in clusterinstance-defaults-v1.yaml.
Configuration is driven by ACM policies that are rendered from policy templates (See PolicyTemplate example). The ClusterTemplate references a ConfigMap with default input values for these templates.
All parameters used in the policy templates must be declared in the schema templateParameterSchema.policyTemplateParameters and provided either through the PolicyTemplate default ConfigMap or via the ProvisioningRequest (spec.templateParameters.policyTemplateParameters).
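For illustration, such a defaults ConfigMap could look like the sketch below. The data key and the parameter names are assumptions; policytemplates-defaults-v1.yaml is the authoritative example.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: policytemplates-defaults-v1
  namespace: sno-ran-du                 # illustrative
data:
  policy-template-defaults: |           # key name assumed
    cpu-isolated: "2-31"                # illustrative parameter and value
    sriov-network-vlan-1: "140"         # illustrative parameter and value
```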
A complete example of PolicyTemplate defaults can be found in policytemplates-defaults-v1.yaml.
When a ClusterTemplate is created, the O-Cloud Manager validates that:
- The name is unique across all namespaces.
- All referenced ConfigMaps and resources exist.
- The required parameters are present in the schema.
Once validation succeeds, the status condition ClusterTemplateValidated is set to True. After successful validation, the ClusterTemplate becomes immutable — any changes to its spec are disallowed and will be rejected. The referenced ConfigMaps are also immutable after validation.
```yaml
status:
  conditions:
  - lastTransitionTime: "2024-10-19T00:00:06Z"
    message: The cluster template validation succeeded
    reason: Completed
    status: "True"
    type: ClusterTemplateValidated
```

You can view ClusterTemplates either from the oc CLI or via the O‑Cloud Manager REST APIs.
```shell
oc get clustertemplates.clcm.openshift.io -A
oc get clustertemplates.clcm.openshift.io -n <namespace> <name> -o yaml
```

First, acquire an authorization token as described in Testing API endpoints on a cluster. Then, get the API endpoint URLs.
```shell
export API_URI=$(oc get route -n oran-o2ims -o jsonpath='{.items[?(@.spec.path=="/o2ims-infrastructureArtifacts")].spec.host}')
export BASE_URL="https://${API_URI}/o2ims-infrastructureArtifacts/v1"
```

List all ClusterTemplates:

```shell
curl -ks -H "Authorization: Bearer ${MY_TOKEN}" \
  "${BASE_URL}/managedInfrastructureTemplates" | jq
```

Filter by name and version:

```shell
curl -ks -H "Authorization: Bearer ${MY_TOKEN}" \
  "${BASE_URL}/managedInfrastructureTemplates?filter=(eq,version,v4-18-5-v1)%3b(eq,name,sno-ran-du)" | jq
```

Retrieve a single ClusterTemplate:

```shell
curl -ks -H "Authorization: Bearer ${MY_TOKEN}" \
  "${BASE_URL}/managedInfrastructureTemplates/sno-ran-du.v4-18-5-v1" | jq
```

Retrieve the defaults of a ClusterTemplate:

```shell
curl -ks -H "Authorization: Bearer ${MY_TOKEN}" \
  "${BASE_URL}/managedInfrastructureTemplates/sno-ran-du.v4-18-5-v1/defaults" | jq
```