Merged
7 changes: 7 additions & 0 deletions docs/set-variables-group-vars.md
@@ -12,6 +12,7 @@
:--- | :--- | :---
**installation_type** | Can be of type kvm or lpar. Some packages will be ignored for installation in case of non lpar based installation. | kvm
**controller_sudo_pass** | The password to the machine running Ansible (localhost). This will only be used for two things. To ensure you've installed the pre-requisite packages if you're on Linux, and to add the login URL to your /etc/hosts file. | Pas$w0rd!
**cex_device** | Specify the storage device type used for LUKS encryption. This setting determines which CEX MCO Ignition configuration will be applied. Use in combination with the cex parameter. | [dasd, fcp, virt]

## 2 - LPAR(s)
**Variable Name** | **Description** | **Example**
@@ -409,3 +410,9 @@
**zvm.interface.ip** | IP addresses to be used for zVM nodes | 192.168.10.1
**zvm.nodes.dasd.disk_id** | Disk id for dasd disk to be used for zVM node | 4404
**zvm.nodes.lun** | Disk details of fcp disk to be used for zVM node | 840a

## Crypto Express Card-based LUKS encryption for zKVM (Optional)
**Variable Name** | **Description** | **Example**
:--- | :--- | :---
**cex** | Whether to enable CEX-based LUKS encryption. Defaults to False. | False
**cex_device** | Specify the storage device type used for LUKS encryption. This setting determines which MCO Ignition configuration will be applied from the defaults. Do not override the default value. Use in combination with the cex parameter. | [dasd, fcp, virt]
**cex_uuid_map** | Required only for KVM installations using a vfio_ap mediated device. Omit it when deploying on an LPAR installation. Use in combination with cex and cex_device. Specify guest hostname: "UUID:domain". The UUID can be generated with the uuidgen command and the domain retrieved from lszcrypt. | upi-cex-control-1: "68cd2d83-3eef-4e45-b22c-534f90b16cb9:00.0035"
13 changes: 13 additions & 0 deletions inventories/default/group_vars/all.yaml.template
@@ -250,3 +250,16 @@ abi:
# (Optional) Proxy
# Please check the documentation for which vars are required (examples included). If use_proxy is set to true,
# then proxy_http, proxy_https and proxy_no must be set.


# Section 15 - CEX-based LUKS Encryption (Optional)
cex: false
# cex_device: [dasd | fcp | virt]
# The following variable is required only when CEX is used as a vfio_ap mediated device in a KVM guest.
# https://www.ibm.com/docs/en/linux-on-systems?topic=management-configuring-crypto-express-adapters-kvm-guests
# cex_uuid_map:
#   hostname: "UUID:domain"
#   hostname: "UUID:domain"
# Provide control and compute hostnames with UUID and domain only for KVM-based installations.
# The UUID can be generated with the `uuidgen` command and the domain retrieved from `lszcrypt -V`.
# e.g. upi-control-1: "5c84eefb-cb45-4519-86d3-ba23e65e8896:12.0001"
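A filled-in Section 15 for a KVM-based installation might look like the sketch below; the hostnames, UUIDs, and domains are illustrative placeholders, not values from this repository.

```yaml
# Illustrative sketch only; substitute your own guest hostnames,
# `uuidgen` output, and domains reported by `lszcrypt -V`.
cex: true
cex_device: dasd
cex_uuid_map:
  upi-control-1: "5c84eefb-cb45-4519-86d3-ba23e65e8896:12.0001"
  upi-compute-1: "68cd2d83-3eef-4e45-b22c-534f90b16cb9:12.0002"
```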
2 changes: 1 addition & 1 deletion inventories/default/hosts
@@ -1,2 +1,2 @@
[localhost]
127.0.0.1 ansible_connection=local
127.0.0.1 ansible_connection=local ansible_become_password=
6 changes: 6 additions & 0 deletions playbooks/3_setup_kvm_host.yaml
@@ -9,14 +9,14 @@
vars_files:
- "{{ inventory_dir }}/group_vars/all.yaml"
vars:
ssh_target: ["{{ env.z.lpar1.ip }}","{{ env.z.lpar1.user }}","{{ env.z.lpar1.pass }}","{{ path_to_key_pair }}"]

Ansible_Lint failure on line 12: yaml[commas] (Too few spaces after comma)
tasks:
- name: Include vars for the KVM host.
include_vars:

Ansible_Lint failure on line 15: fqcn[action-core] (Use FQCN for builtin module actions: include_vars)
file: "{{ inventory_dir }}/host_vars/{{ env.z.lpar1.hostname }}.yaml"

- name: Copy SSH key to KVM host.
import_role:

Ansible_Lint failure on line 19: fqcn[action-core] (Use FQCN for builtin module actions: import_role)
name: ssh_copy_id

- name: Copy SSH key to access KVM host 2
@@ -28,16 +28,16 @@
vars_files:
- "{{ inventory_dir }}/group_vars/all.yaml"
vars:
ssh_target: ["{{ env.z.lpar2.ip }}","{{ env.z.lpar2.user }}","{{ env.z.lpar2.pass }}","{{ path_to_key_pair }}"]

Ansible_Lint failure on line 31: yaml[commas] (Too few spaces after comma)
tasks:
- name: Include vars for second KVM host.
include_vars:

Ansible_Lint failure on line 34: fqcn[action-core] (Use FQCN for builtin module actions: include_vars)
file: "{{ inventory_dir }}/host_vars/{{ env.z.lpar2.hostname }}.yaml"
when: env.z.lpar2.hostname is defined

- name: copy SSH key to second KVM host, if cluster is to be highly available.
tags: ssh_copy_id, ssh
import_role:

Ansible_Lint failure on line 40: fqcn[action-core] (Use FQCN for builtin module actions: import_role)
name: ssh_copy_id
when: env.z.lpar2.hostname is defined

@@ -50,16 +50,16 @@
vars_files:
- "{{ inventory_dir }}/group_vars/all.yaml"
vars:
ssh_target: ["{{ env.z.lpar3.ip }}","{{ env.z.lpar3.user }}","{{ env.z.lpar3.pass }}","{{ path_to_key_pair }}"]

Ansible_Lint failure on line 53: yaml[commas] (Too few spaces after comma)
tasks:
- name: Include vars for third KVM host.
include_vars:

Ansible_Lint failure on line 56: fqcn[action-core] (Use FQCN for builtin module actions: include_vars)
file: "{{ inventory_dir }}/host_vars/{{ env.z.lpar3.hostname }}.yaml"
when: env.z.lpar3.hostname is defined

- name: copy SSH key to third KVM host, if cluster is to be highly available.
tags: ssh_copy_id, ssh
import_role:

Ansible_Lint failure on line 62: fqcn[action-core] (Use FQCN for builtin module actions: import_role)
name: ssh_copy_id
when: env.z.lpar3.hostname is defined

@@ -71,7 +71,7 @@
vars:
packages: "pkgs_kvm"
roles:
- { role: attach_subscription, when: env.redhat.manage_subscription }

Ansible_Lint failure on line 74: yaml[colons] (Too many spaces after colon)
- install_packages
- httpd
post_tasks:
@@ -188,3 +188,9 @@
roles:
- configure_storage
- { role: macvtap, when: env.network_mode | upper != 'NAT' }

- hosts: kvm_host
tags: setup, section_3
become: true
roles:
- { role: configure_cex, when: cex | bool and cex_uuid_map is defined }
40 changes: 40 additions & 0 deletions roles/configure_cex/tasks/main.yaml
@@ -0,0 +1,40 @@
---

- name: Set cex_cards from cex_uuid_map values (uuid:domain only)
set_fact:
cex_cards: "{{ cex_uuid_map.values() | list | unique }}"

- name: Debug final list of CEX UUID assignments
debug:
var: cex_cards

- name: Create VFIO assignment script for all CEX cards
template:
src: assign_cards.sh.j2
dest: /tmp/assign_all_cex_cards.sh
mode: '0755'

- name: Execute VFIO assignment script
shell: /tmp/assign_all_cex_cards.sh
args:
executable: /bin/bash

- name: Housekeep temporary assignment script
file:
path: /tmp/assign_all_cex_cards.sh
state: absent

- name: Initialize empty cex_hostdev_map
set_fact:
cex_hostdev_map: {}

- name: Populate cex_hostdev_map with mdev format
set_fact:
cex_hostdev_map: "{{ cex_hostdev_map | combine({ item.key : 'mdev_' + (item.value.split(':')[0] | regex_replace('-', '_')) + '_matrix' }) }}"

Ansible_Lint warning on line 33: jinja[spacing] (Jinja2 spacing could be improved in the combine() expression)
loop: "{{ cex_uuid_map | dict2items }}"


- name: Save cex_hostdev_map to a file for reuse
copy:
dest: "/root/.cex_hostdev_map.json"
content: "{{ cex_hostdev_map | to_nice_json }}"
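To make the mapping concrete, here is a hypothetical before/after (hostname and UUID invented for illustration): each `UUID:domain` value loses its domain suffix, has dashes replaced with underscores, and is wrapped as a libvirt mediated-device name.

```yaml
# Hypothetical input:
cex_uuid_map:
  upi-control-1: "5c84eefb-cb45-4519-86d3-ba23e65e8896:12.0001"

# Resulting cex_hostdev_map, saved to /root/.cex_hostdev_map.json:
cex_hostdev_map:
  upi-control-1: "mdev_5c84eefb_cb45_4519_86d3_ba23e65e8896_matrix"
```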
38 changes: 38 additions & 0 deletions roles/configure_cex/templates/assign_cards.sh.j2
@@ -0,0 +1,38 @@
#!/bin/bash
# Reference document for the cex configuration in zKVM
# https://www.ibm.com/docs/en/linux-on-systems?topic=management-configuring-crypto-express-adapters-kvm-guests

# Configure each CEX card
{% for entry in cex_cards %}
{% set uuid = entry.split(':')[0] %}
{% set matrix_val = entry.split(':')[1] %}
{% set adapter = matrix_val.split('.')[0] %}
{% set domain = matrix_val.split('.')[1] %}

uuid="{{ uuid }}"
matrix_val="{{ matrix_val }}"
adapter="{{ adapter }}"
domain="{{ domain }}"

uuid_path="/sys/devices/vfio_ap/matrix/$uuid"
matrix_file="$uuid_path/matrix"

if [ -d "$uuid_path" ]; then
if grep -q "{{ matrix_val }}" "$matrix_file" 2>/dev/null; then
echo "[INFO] UUID $uuid already configured with matrix {{ matrix_val }} — skipping."
else
echo "[WARN] UUID $uuid exists, but matrix entry '{{ matrix_val }}' not found!"
echo "[WARN] Please reboot the node and try again."
exit 1
fi
else
modprobe vfio_ap
echo 0x0 > /sys/bus/ap/apmask
echo 0x0 > /sys/bus/ap/aqmask
echo "[INFO] Creating UUID $uuid with adapter $adapter and domain $domain"
echo "$uuid" > /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/create
echo "0x$adapter" > "$uuid_path/assign_adapter"
echo "0x$domain" > "$uuid_path/assign_domain"
fi

{% endfor %}
28 changes: 27 additions & 1 deletion roles/create_compute_nodes/tasks/main.yaml
@@ -1,4 +1,23 @@
---
- name: Load cex_hostdev_map from JSON
set_fact:
cex_hostdev_map: "{{ lookup('file', '/root/.cex_hostdev_map.json') | from_json }}"
when: cex_uuid_map is defined

- name: Debug rendered cex_hostdev per compute node
debug:
msg: "VM: {{ vm_name }}, Hostdev: {{ cex_hostdev }}"
vars:
vm_name: "{{ env.cluster.nodes.compute.vm_name[i] }}"
cex_hostdev: >-
{% if cex and cex_device is defined and cex_hostdev_map is defined and vm_name in cex_hostdev_map %}
--hostdev={{ cex_hostdev_map[vm_name] }}
{% else %}
""
{% endif %}
with_sequence: start=0 end={{ (env.cluster.nodes.compute.hostname | length) - 1 }}
loop_control:
index_var: i

- name: 'Include matching lpar yml file'
tags: create_teuthology_node
@@ -7,6 +26,12 @@

- name: Install CoreOS on compute nodes
tags: create_compute_nodes
vars:
vm_name: "{{ env.cluster.nodes.compute.vm_name[i] }}"
cex_hostdev: >-
{% if cex and cex_device is defined and cex_hostdev_map is defined and vm_name in cex_hostdev_map %}
--hostdev={{ cex_hostdev_map[vm_name] }}
{% endif %}
shell: |
virsh destroy {{ env.cluster.nodes.compute.vm_name[i] }} || true
virsh undefine {{ env.cluster.nodes.compute.vm_name[i] }} --remove-all-storage --nvram || true
@@ -35,7 +60,8 @@
--memballoon none \
--graphics none \
--wait=-1 \
--noautoconsole
--noautoconsole \
{{ cex_hostdev }}
timeout: 360
with_sequence: start=0 end={{ (env.cluster.nodes.compute.hostname | length) - 1 }} stride=1
loop_control:
@@ -45,7 +71,7 @@

- name: Install CoreOS on infra nodes
tags: create_compute_nodes
shell: |

Ansible_Lint warning on line 74: jinja[spacing] (Jinja2 spacing could be improved in the virt-install command expression)
virt-install \
--name {{ env.cluster.nodes.infra.vm_name[i] }} \
--osinfo detect=on,name={{ ('rhel8.6') if rhcos_os_variant is not defined else (rhcos_os_variant) }} \
@@ -68,7 +94,7 @@
--graphics none \
--wait=-1 \
--noautoconsole
with_sequence: start=0 end={{ ( env.cluster.nodes.infra.hostname | length ) - 1}} stride=1

Ansible_Lint warning on line 97: jinja[spacing] (Jinja2 spacing could be improved in the with_sequence expression)
loop_control:
extended: yes
index_var: i
@@ -79,16 +105,16 @@
- name: Split information from compute nodes into groups. The number of groups being equal to the number of KVM hosts there are.
tags: create_compute_nodes
set_fact:
compute_name: "{{ env.cluster.nodes.compute.vm_name[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"

Ansible_Lint warning on line 108: jinja[spacing] (Jinja2 spacing could be improved in the vm_name slice expression)
compute_hostname: "{{ env.cluster.nodes.compute.hostname[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"

Ansible_Lint warning on line 109: jinja[spacing] (Jinja2 spacing could be improved in the hostname slice expression)
compute_ip: "{{ env.cluster.nodes.compute.ip[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"

Ansible_Lint warning on line 110: jinja[spacing] (Jinja2 spacing could be improved in the ip slice expression)
compute_ipv6: "{{ env.cluster.nodes.compute.ipv6[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] if env.use_ipv6 == True else '' }}"

Ansible_Lint warning on line 111: jinja[spacing] (Jinja2 spacing could be improved in the ipv6 slice expression)
when: env.z.high_availability == True

- name: Split information for infra nodes into groups. The number of groups being equal to the number of KVM hosts there are.
tags: create_compute_nodes
set_fact:
infra_name: "{{ env.cluster.nodes.infra.vm_name[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"

Ansible_Lint warning on line 117: jinja[spacing] (Jinja2 spacing could be improved in the infra vm_name slice expression)
infra_hostname: "{{ env.cluster.nodes.infra.hostname[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"
infra_ip: "{{ env.cluster.nodes.infra.ip[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] }}"
infra_ipv6: "{{ env.cluster.nodes.infra.ipv6[groups['kvm_host'].index(inventory_hostname)::groups['kvm_host'] | length] if env.use_ipv6 == True else '' }}"
30 changes: 29 additions & 1 deletion roles/create_control_nodes/tasks/main.yaml
@@ -5,8 +5,34 @@
ansible.builtin.include_vars:
file: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}.yaml"

- name: Load cex_hostdev_map from JSON file
set_fact:
cex_hostdev_map: "{{ lookup('file', '/root/.cex_hostdev_map.json') | from_json }}"
when: cex_uuid_map is defined

- name: Debug rendered cex_hostdev per control node
debug:
msg: "VM: {{ vm_name }}, Hostdev: {{ cex_hostdev }}"
vars:
vm_name: "{{ env.cluster.nodes.control.vm_name[i] }}"
cex_hostdev: >-
{% if cex and cex_device is defined and cex_hostdev_map is defined and vm_name in cex_hostdev_map %}
--hostdev={{ cex_hostdev_map[vm_name] }}
{% else %}
""
{% endif %}
with_sequence: start=0 end={{ (env.cluster.nodes.control.hostname | length) - 1 }}
loop_control:
index_var: i

- name: Create CoreOS control nodes on the KVM host.
tags: create_control_nodes
vars:
vm_name: "{{ env.cluster.nodes.control.vm_name[i] }}"
cex_hostdev: >-
{% if cex and cex_device is defined and cex_hostdev_map is defined and vm_name in cex_hostdev_map %}
--hostdev={{ cex_hostdev_map[vm_name] }}
{% endif %}
shell: |
virt-install \
--name {{ env.cluster.nodes.control.vm_name[i] }} \
@@ -34,7 +60,8 @@
--graphics none \
--console pty,target_type=serial \
--wait=-1 \
--noautoconsole
--noautoconsole \
{{ cex_hostdev }}
timeout: 360
with_sequence: start=0 end={{ (env.cluster.nodes.control.hostname | length) - 1 }} stride=1
loop_control:
@@ -72,6 +99,7 @@
--graphics none \
--wait=-1 \
--noautoconsole

when: env.z.high_availability == True and inventory_hostname == env.z.lpar1.hostname and env.cluster.nodes.control.vm_name[0] not in hosts_with_host_vars

- name: Create the second CoreOS control node on the first KVM host, if cluster is to be highly available.
15 changes: 15 additions & 0 deletions roles/get_ocp/defaults/main.yaml
@@ -32,3 +32,18 @@ use_proxy: false
proxy_http:
proxy_https:
proxy_no:

# (Optional) CEX Ignition specific
# Default mappings based on cex_device
output_dir: "/tmp"
butane_default:
version: 4.19.0
dasd:
layout: s390x-eckd
device: /dev/dasd
virt:
layout: s390x-virt
device: /dev/disk/by-partlabel/root
fcp:
layout: s390x-fcp
device: /dev/disk/by-label/root
43 changes: 43 additions & 0 deletions roles/get_ocp/tasks/main.yaml
@@ -129,6 +129,49 @@
- .openshift_install.log
- .openshift_install_state.json

- name: Generate Butane Ignition configs if CEX device is defined
tags: get_ocp
become: true
block:
- name: Generate Butane file for nodes
template:
src: cex-butane-machineconfig.bu.j2
dest: "{{ output_dir }}/99-{{ node_role }}-s390x-cex-luks-config.bu"
loop:
- master
- worker
loop_control:
loop_var: node_role
vars:
luks_device: "{{ butane_default[cex_device].device }}"
layout: "{{ butane_default[cex_device].layout }}"

- name: Download Butane binary for s390x
get_url:
url: https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-s390x
dest: /usr/local/bin/butane
mode: '0755'

- name: Render Butane configs into MachineConfig manifests for each role
shell: |
/usr/local/bin/butane "{{ output_dir }}/99-{{ item }}-s390x-cex-luks-config.bu" \
-o "{{ output_dir }}/99-{{ item }}-s390x-cex-luks-config.yaml"
loop:
- master
- worker

- name: Copy generated MachineConfig manifests to final directory
copy:
src: "{{ output_dir }}/99-{{ item }}-s390x-cex-luks-config.yaml"
dest: "/root/ocpinst/openshift/99-{{ item }}-s390x-cex-luks-config.yaml"
remote_src: yes
mode: '0644'
loop:
- master
- worker
when:
- cex | bool

- name: Set ownership of ocpinst directory contents to root
tags: get_ocp
become: true
35 changes: 35 additions & 0 deletions roles/get_ocp/templates/cex-butane-machineconfig.bu.j2
@@ -0,0 +1,35 @@
variant: openshift
version: {{ butane_default["version"] }}
metadata:
name: {{ node_role }}-luks-storage
labels:
machineconfiguration.openshift.io/role: {{ node_role }}

openshift:
fips: true
kernel_arguments:
- rd.luks.key=/etc/luks/cex.key

{% if cex_device in ['dasd', 'virt'] %}
boot_device:
layout: {{ butane_default[cex_device].layout }}
luks:
device: {{ butane_default[cex_device].device }}
cex:
enabled: true

{% elif cex_device == 'fcp' %}
storage:
filesystems:
- device: /dev/mapper/root
format: xfs
label: root
wipe_filesystem: true
luks:
- cex:
enabled: true
device: {{ butane_default[cex_device].device }}
label: luks-root
name: root
wipe_volume: true
{% endif %}
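For reference, assuming `cex_device: dasd` and the `butane_default` values from `roles/get_ocp/defaults/main.yaml`, the template above would render roughly as follows for a control node (a sketch, not captured output):

```yaml
variant: openshift
version: 4.19.0
metadata:
  name: master-luks-storage
  labels:
    machineconfiguration.openshift.io/role: master

openshift:
  fips: true
  kernel_arguments:
    - rd.luks.key=/etc/luks/cex.key

boot_device:
  layout: s390x-eckd
  luks:
    device: /dev/dasd
    cex:
      enabled: true
```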