======
Ironic
======

Ironic networking
=================

Ironic requires the workload provisioning and cleaning networks to be
configured in ``networks.yml``.

The workload provisioning network requires allocation pools for Ironic
inspection and for Neutron. An example configuration is shown below.

.. code-block:: yaml

   # Workload provisioning network IP information.
   provision_wl_net_cidr: "172.0.0.0/16"
   provision_wl_net_allocation_pool_start: "172.0.0.4"
   provision_wl_net_allocation_pool_end: "172.0.0.6"
   provision_wl_net_inspection_allocation_pool_start: "172.0.1.4"
   provision_wl_net_inspection_allocation_pool_end: "172.0.1.250"
   provision_wl_net_neutron_allocation_pool_start: "172.0.2.4"
   provision_wl_net_neutron_allocation_pool_end: "172.0.2.250"
   provision_wl_net_neutron_gateway: "172.0.1.1"

The cleaning network will also require a Neutron allocation pool.

.. code-block:: yaml

   # Cleaning network IP information.
   cleaning_net_cidr: "172.1.0.0/16"
   cleaning_net_allocation_pool_start: "172.1.0.4"
   cleaning_net_allocation_pool_end: "172.1.0.6"
   cleaning_net_neutron_allocation_pool_start: "172.1.2.4"
   cleaning_net_neutron_allocation_pool_end: "172.1.2.250"
   cleaning_net_neutron_gateway: "172.1.0.1"

OpenStack Config
================

Overcloud Ironic requires a router between the internal API network and the
workload provisioning network. One way to achieve this is to use `OpenStack
Config <https://github.com/stackhpc/openstack-config>`__ to define the
internal API network in Neutron and set up a router with a gateway.

It is not necessary to define the provision and cleaning networks in this
configuration, as they will be generated during

.. code-block:: console

   kayobe overcloud post configure

The OpenStack Config file could resemble the network, subnet and router
configuration shown below:

.. code-block:: yaml

   networks:
     - "{{ openstack_network_internal }}"

   openstack_network_internal:
     name: "internal-net"
     project: "admin"
     provider_network_type: "vlan"
     provider_physical_network: "physnet1"
     provider_segmentation_id: 458
     shared: false
     external: true

   subnets:
     - "{{ openstack_subnet_internal }}"

   openstack_subnet_internal:
     name: "internal-net"
     project: "admin"
     cidr: "10.10.3.0/24"
     enable_dhcp: true
     allocation_pool_start: "10.10.3.3"
     allocation_pool_end: "10.10.3.3"

   openstack_routers:
     - "{{ openstack_router_ironic }}"

   openstack_router_ironic:
     name: "ironic"
     project: "admin"
     interfaces:
       - net: "provision-net"
         subnet: "provision-net"
         portip: "172.0.1.1"
       - net: "cleaning-net"
         subnet: "cleaning-net"
         portip: "172.1.0.1"
     network: "internal-net"

To provision baremetal nodes in Nova you will also need to set a flavor
specific to that type of baremetal host. You will need to replace the custom
resource ``resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>`` placeholder
with the resource class of your baremetal hosts; you will also need this
later when configuring the baremetal-compute inventory.

.. code-block:: yaml

   openstack_flavors:
     - "{{ openstack_flavor_baremetal_A }}"

   # Bare metal compute node.
   openstack_flavor_baremetal_A:
     name: "baremetal-A"
     ram: 1048576
     disk: 480
     vcpus: 256
     extra_specs:
       "resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>": 1
       "resources:VCPU": 0
       "resources:MEMORY_MB": 0
       "resources:DISK_GB": 0

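The extra spec key is derived from the node's resource class in Ironic: the
class name is upper-cased, punctuation is replaced with underscores, and the
result is prefixed with ``CUSTOM_``. For example, for a hypothetical
resource class of ``gold.compute`` the extra spec would be:

.. code-block:: yaml

   "resources:CUSTOM_GOLD_COMPUTE": 1
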
Enabling conntrack
==================

UEFI booting requires ``conntrack_helper`` to be configured on the Ironic
Neutron router, as TFTP traffic would otherwise be dropped because it uses
UDP. You will need to define some extension drivers in ``neutron.yml`` to
ensure conntrack is enabled in the Neutron server.

.. code-block:: yaml

   kolla_neutron_ml2_extension_drivers:
     - port_security
     - conntrack_helper
     - dns_domain_ports

The Neutron L3 agent also requires conntrack to be set as an extension in
``kolla/config/neutron/l3_agent.ini``.

.. code-block:: ini

   [agent]
   extensions = conntrack_helper

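Once these options are in place, they can be applied to the running services
with, for example:

.. code-block:: console

   kayobe overcloud service reconfigure
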
It is also required to load the conntrack kernel modules ``nf_nat_tftp``,
``nf_conntrack`` and ``nf_conntrack_tftp`` on network nodes. You can load
these modules using modprobe or define them in ``/etc/modules-load.d/``, as
shown below.

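A minimal sketch; the ``conntrack-tftp.conf`` file name is arbitrary:

.. code-block:: console

   # Load the modules immediately on each network node.
   modprobe -a nf_nat_tftp nf_conntrack nf_conntrack_tftp

   # Persist the modules across reboots.
   cat <<EOF > /etc/modules-load.d/conntrack-tftp.conf
   nf_nat_tftp
   nf_conntrack
   nf_conntrack_tftp
   EOF
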
The Ironic Neutron router will also need to be configured to use
``conntrack_helper``.

.. code-block:: json

   "conntrack_helpers": {
       "protocol": "udp",
       "port": 69,
       "helper": "tftp"
   }

Currently it is not possible to add this helper via the OpenStack CLI. To
add it to the Ironic router you will need to make a request to the Neutron
API directly, for example via cURL.

.. code-block:: console

   curl -g -i -X POST \
     http://<internal_api_vip>:9696/v2.0/routers/<ironic_router_uuid>/conntrack_helpers \
     -H "Accept: application/json" \
     -H "User-Agent: openstacksdk/2.0.0 keystoneauth1/5.4.0 python-requests/2.31.0 CPython/3.9.18" \
     -H "X-Auth-Token: <issued_token>" \
     -d '{ "conntrack_helper": {"helper": "tftp", "protocol": "udp", "port": 69 } }'

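The token and router UUID placeholders can be obtained via the OpenStack
CLI, for example:

.. code-block:: console

   openstack token issue -f value -c id
   openstack router show ironic -f value -c id
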
TFTP server
===========

By default the Ironic TFTP server (the ``ironic_pxe`` container) references
the UEFI boot file ``ipxe-x86_64.efi`` instead of ``ipxe.efi``, meaning no
boot file will be sent during the PXE boot process in the default
configuration.

For now this is solved with a workaround: renaming the boot file inside the
``ironic_pxe`` container. To do this you will need to enter the container
and rename the file manually.

.. code-block:: console

   docker exec ironic_pxe mv /tftpboot/ipxe-x86_64.efi /tftpboot/ipxe.efi

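Note that this change only exists in the container's filesystem, so it will
need to be reapplied if the ``ironic_pxe`` container is recreated. You can
confirm the file is in place with:

.. code-block:: console

   docker exec ironic_pxe ls /tftpboot/ipxe.efi
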
Baremetal inventory
===================

To begin enrolling nodes you will need to define them in the hosts file.

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16
   hv2 ipmi_address=10.1.28.17
   …

   [r1:vars]
   ironic_driver=redfish
   resource_class=<your_resource_class>
   redfish_system_id=<your_redfish_system_id>
   redfish_verify_ca=<your_redfish_verify_ca>
   redfish_username=<your_redfish_username>
   redfish_password=<your_redfish_password>

   [baremetal-compute:children]
   r1

Baremetal nodes are typically laid out by rack. For instance, in the rack 1
example above, the BMC addresses are defined per node, while Redfish
information such as the username, password and system ID are defined for the
rack as a whole.

You can add more racks to the deployment by replicating the rack 1 example
and adding each new rack to the ``baremetal-compute`` group, as shown below.

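A sketch for a hypothetical second rack; the host names and addresses are
illustrative:

.. code-block:: ini

   [r2]
   hv17 ipmi_address=10.1.29.16
   hv18 ipmi_address=10.1.29.17

   [r2:vars]
   ironic_driver=redfish
   resource_class=<your_resource_class>
   redfish_system_id=<your_redfish_system_id>
   redfish_verify_ca=<your_redfish_verify_ca>
   redfish_username=<your_redfish_username>
   redfish_password=<your_redfish_password>

   [baremetal-compute:children]
   r1
   r2
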
Node enrollment
===============

When nodes are defined in the inventory you can begin enrolling them by
invoking the Kayobe command below. (Note that only the Redfish driver is
supported by this command.)

.. code-block:: console

   kayobe baremetal compute register

Following registration, the baremetal nodes can be inspected and made
available for provisioning by Nova via the following Kayobe commands:

.. code-block:: console

   kayobe baremetal compute inspect
   kayobe baremetal compute provide

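Once cleaning has completed, you can verify that the nodes have reached the
``available`` state with, for example:

.. code-block:: console

   openstack baremetal node list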