@@ -46,7 +46,7 @@ The cleaning network will also require a Neutron allocation pool.
OpenStack Config
================

- Overcloud Ironic will be deployed with a listening TFTP server on the
+ Overcloud Ironic is deployed with a listening TFTP server on the
control plane which provides baremetal nodes that PXE boot with the
Ironic Python Agent (IPA) kernel and ramdisk. Since the TFTP server is
listening exclusively on the internal API network, it's necessary for a
@@ -55,13 +55,13 @@ API network, we can achieve this is by defining a Neutron router using
`OpenStack Config <https://github.com/stackhpc/openstack-config>`_.

It is not necessary to define the provision and cleaning networks in this
- configuration as they will be generated during
+ configuration as these are generated during:

.. code-block:: console

   kayobe overcloud post configure

- The openstack config file could resemble the network, subnet and router
+ The OpenStack config file could resemble the network, subnet and router
configuration shown below:

.. code-block:: yaml
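
The YAML example itself lies outside this hunk. Purely as an illustration of
the routing being described, an equivalent router could be created by hand
with the OpenStack CLI; every name below is a placeholder rather than a value
taken from the configuration:

.. code-block:: console

   # Router joining the provision/cleaning subnet to the internal API network.
   openstack router create ironic-provision-router
   openstack router add subnet ironic-provision-router provision-subnet
   openstack router set ironic-provision-router --external-gateway internal-api-net
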
@@ -129,10 +129,10 @@ configuring the baremetal-compute inventory.
Enabling conntrack (ML2/OVS only)
=================================

- Conntrack_helper will be required when UEFI booting on a cloud with ML2/OVS
+ Conntrack_helper is required when UEFI booting on a cloud with ML2/OVS
and using the iptables firewall_driver; otherwise TFTP traffic is dropped
because it is UDP. You will need to define some extension drivers in ``neutron.yml``
- to ensure conntrack is enabled in neutron server.
+ to ensure conntrack is enabled in the Neutron server.

.. code-block:: yaml

@@ -141,20 +141,20 @@ to ensure conntrack is enabled in neutron server.
   conntrack_helper
   dns_domain_ports
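
The top of this YAML block falls outside the hunk. Assuming it extends the ML2
extension driver list in Kayobe's ``neutron.yml`` (the
``kolla_neutron_ml2_extension_drivers`` variable name is an assumption, not
something shown in this diff), the full snippet would look roughly like:

.. code-block:: yaml

   # etc/kayobe/neutron.yml -- illustrative sketch only
   kolla_neutron_ml2_extension_drivers:
     - conntrack_helper
     - dns_domain_ports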

- The neutron l3 agent also requires conntrack to be set as an extension in
+ The Neutron L3 agent also requires conntrack to be set as an extension in
``kolla/config/neutron/l3_agent.ini``

.. code-block:: ini

   [agent]
   extensions = conntrack_helper

- It is also required to load the conntrack kernel module ``nf_nat_tftp``,
- ``nf_conntrack`` and ``nf_conntrack_tftp`` on network nodes. You can load these
- modules using modprobe or define these in /etc/module-load.
+ The conntrack kernel modules ``nf_nat_tftp``, ``nf_conntrack``,
+ and ``nf_conntrack_tftp`` are also required on network nodes. You
+ can load these modules using modprobe or define them in ``/etc/modules-load.d``.
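
For example, the modules can be loaded immediately with ``modprobe`` and made
persistent with a ``modules-load.d`` drop-in (the file name here is arbitrary):

.. code-block:: console

   sudo modprobe nf_nat_tftp
   sudo modprobe nf_conntrack_tftp
   printf 'nf_conntrack\nnf_nat_tftp\nnf_conntrack_tftp\n' | sudo tee /etc/modules-load.d/ironic-tftp.conf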

- The Ironic neutron router will also need to be configured to use
- conntrack_helper.
+ The Ironic Neutron router will also need to be configured to use
+ ``conntrack_helper``.

.. code-block:: json

@@ -164,7 +164,7 @@ conntrack_helper.
      "helper": "tftp"
   }

- To add the conntrack_helper to the neutron router, you can use the openstack
+ To add the conntrack_helper to the Neutron router, you can use the OpenStack
CLI:

.. code-block:: console
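
The command itself sits outside this hunk. Assuming the standard
python-openstackclient syntax for the ``conntrack_helper`` extension, it would
be along these lines (the router name is a placeholder):

.. code-block:: console

   openstack network l3 conntrack helper create --helper tftp --protocol udp --port 69 <ironic-router>
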
@@ -180,15 +180,15 @@ Baremetal inventory

The baremetal inventory is constructed with three different group types.
The first group is the default baremetal compute group for Kayobe called
- [baremetal-compute] and will contain all baremetal nodes including tenant
- and hypervisor nodes. This group acts as a parent for all baremetal nodes
- and config that can be shared between all baremetal nodes will be defined
- here.
+ ``[baremetal-compute]`` and contains all baremetal nodes, including
+ baremetal-compute (tenant) nodes and hypervisor nodes. This group acts as
+ a parent for all baremetal nodes, and config that is shared between all
+ baremetal nodes is defined here.

We will need to create a Kayobe group_vars file for the baremetal-compute
group that contains all the variables we want to define for the group. We
can put all these variables in the inventory in
- ‘inventory/group_vars/baremetal-compute/ironic-vars’ The ironic_driver_info
+ ``inventory/group_vars/baremetal-compute/ironic-vars``. The ironic_driver_info
template dict contains all variables to be templated into the driver_info
property in Ironic. This includes the BMC address, username, password,
IPA configuration etc. We also currently define the ironic_driver here as
@@ -214,21 +214,21 @@ all nodes currently use the Redfish driver.
   ironic_redfish_password: "{{ inspector_redfish_password }}"
   ironic_capabilities: "boot_option:local,boot_mode:uefi"
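
Only the tail of the ``ironic-vars`` snippet is visible in this hunk. As a
rough sketch of the driver settings described above (key and variable names
beyond those shown are assumptions, not the file's actual contents), it might
look something like:

.. code-block:: yaml

   # inventory/group_vars/baremetal-compute/ironic-vars -- illustrative sketch
   ironic_driver: "redfish"
   ironic_driver_info:
     redfish_address: "{{ ironic_redfish_address }}"
     redfish_system_id: "{{ ironic_redfish_system_id }}"
     redfish_username: "{{ ironic_redfish_username }}"
     redfish_password: "{{ ironic_redfish_password }}"
     # plus IPA deploy kernel/ramdisk settings, etc.
   ironic_redfish_password: "{{ inspector_redfish_password }}"
   ironic_capabilities: "boot_option:local,boot_mode:uefi"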

- The second group type will be the hardware type that a baremetal node belongs
- to, These variables will be in the inventory too in ‘inventory/group_vars/
+ The second group type is the hardware type that a baremetal node belongs
+ to. These variables are also kept in the inventory, in ``inventory/group_vars/
baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>``.

Variables specific to the hardware type include the resource_class, which is
used to associate the hardware type with the Nova flavour we defined earlier
- in Openstack Config.
+ in OpenStack Config.

.. code-block:: yaml

   ironic_resource_class: "example_resource_class"
   ironic_redfish_system_id: "example_system_id"
   ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"

- The third group type will be the rack where the node is installed. This is the
+ The third group type is the rack where the node is installed. This is the
group in which the rack-specific networking configuration is defined and
where the BMC address is entered as a host variable for each baremetal node.
Nodes can now be entered directly into the hosts file as part of this group.
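
For illustration only (the group, host and variable names below are
assumptions rather than values from this configuration), a rack group in the
hosts file might look like:

.. code-block:: ini

   [rack1]
   rack1-node-01 ironic_redfish_address=10.1.0.11
   rack1-node-02 ironic_redfish_address=10.1.0.12
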
@@ -262,34 +262,34 @@ invoking the Kayobe commmand

.. code-block:: console

-    (kayobe) $ kayobe baremetal compute register
+    kayobe baremetal compute register

All nodes that were not defined in Ironic previously should have been enrolled
by this playbook and should now be in ``manageable`` state if Ironic was
able to reach the BMC of the node. We will need to inspect the baremetal nodes
to gather information about their hardware to prepare for deployment. Kayobe
- provides an inspection workflow and can be run using:
+ provides an inspection command, which can be run using:

.. code-block:: console

-    (kayobe) $ kayobe baremetal compute inspect
+    kayobe baremetal compute inspect

Inspection requires PXE booting the nodes into IPA. If the nodes were able
to PXE boot properly they will now be in ``manageable`` state again. If an error
developed during PXE booting, the nodes will now be in ``inspect failed`` state
and issues preventing the node from booting or returning introspection data
will need to be resolved before continuing. If the nodes did inspect properly,
- they can be cleaned and made available to Nova by running the provide workflow.
+ they can be cleaned and made available to Nova by running the provide command.

.. code-block:: console

-    (kayobe) $ kayobe baremetal compute provide
+    kayobe baremetal compute provide
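
A convenient way to confirm where each node has ended up after these steps
(this is an extra check, not part of the original walkthrough) is to list the
provision states:

.. code-block:: console

   openstack baremetal node list --fields name provision_state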
287
287
288
288
Baremetal hypervisors
289
289
=====================
290
290
291
291
Nodes that will not be dedicated as baremetal tenant nodes can be converted
292
- into hypervisors as required. StackHPC Kayobe configuration provides a workflow
292
+ into hypervisors as required. StackHPC Kayobe configuration provides a command
293
293
to provision baremetal tenants with the purpose of converted these nodes to
294
294
hypervisors. To begin the process of converting nodes we will need to define a
295
295
child group of the rack which will contain baremetal nodes dedicated to compute
@@ -314,10 +314,10 @@ hosts.
314
314
rack1-compute
315
315
316
316
The rack1-compute group as shown above is also associated with the Kayobe
317
- compute group in order for Kayobe to run the compute Kolla workflows on these
318
- nodes during service deployment.
317
+ compute group in order for Kayobe to deploy compute services during Kolla
318
+ service deployment.
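
As a minimal sketch of that layout (group and host names are illustrative, and
the real definitions are largely outside this hunk), the inventory could
express it as:

.. code-block:: ini

   [rack1-compute]
   rack1-node-03

   [rack1:children]
   rack1-compute

   [compute:children]
   rack1-compute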

- You will also need to setup the Kayobe network configuration for the rack1
+ You will also need to set up the Kayobe network configuration for the rack1
group. In ``networks.yml`` you should create an admin network for the rack1
group; this should use the correct CIDR for the rack being deployed.
The configuration in ``networks.yml`` should resemble the following:
@@ -328,7 +328,7 @@ The configuration should resemble below in networks.yml:
   physical_rack1_admin_oc_net_gateway: "172.16.208.129"
   physical_rack1_admin_net_defroute: true
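
The head of this block, including the CIDR, is outside the hunk; based on
Kayobe's per-network attribute naming, the missing line is presumably
something like the following (the value is only an example):

.. code-block:: yaml

   physical_rack1_admin_oc_net_cidr: "172.16.208.128/25"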

- You will also need to configure a neutron network for racks to deploy instances
+ You will also need to configure a Neutron network for racks to deploy instances
on; we can configure this in openstack-config as before. We will need to define
this network and associate a subnet with it for each rack we want to enroll in
Ironic.
@@ -356,8 +356,8 @@ Ironic.
   allocation_pool_end: "172.16.208.130"

The subnet configuration largely resembles the Kayobe network configuration,
- however we do not need to define an allocation pool or enable dhcp as we will
- be associating neutron ports with our hypervisor instances per IP address to
+ however we do not need to define an allocation pool or enable DHCP, as we will
+ be associating Neutron ports with our hypervisor instances by IP address to
ensure they match up properly.

Now we should ensure the network interfaces are properly configured for the
@@ -379,9 +379,9 @@ for rack1 and the kayobe internal API network and be defined in the group_vars.
   internal_net_interface: "br0.{{ internal_net_vlan }}"

We should also ensure some variables are configured properly for our group,
- such as the hypervisor image. These variables can be defined anywhere in
- group_vars, we can place them in the ironic-vars file we used before for
- baremetal node registration.
+ such as the hypervisor image. These variables can be defined in group_vars;
+ we can place them in the ironic-vars file we used before for baremetal node
+ registration.

.. code-block:: yaml

@@ -397,7 +397,7 @@ baremetal node registration.
   project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"

With these variables defined we can now deploy the baremetal nodes as
- instances, to begin we invoke the deploy-baremetal-instance ansible playbook.
+ instances by invoking the deploy-baremetal-instance Ansible playbook.

.. code-block:: console
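
The command body is outside this hunk; given the playbook name above and the
``kayobe playbook run`` pattern used later on this page, it is presumably
invoked along these lines:

.. code-block:: console

   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/deploy-baremetal-instance.yml
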
@@ -418,48 +418,43 @@ Neutron port configured with the address of the baremetal node admin network.
The baremetal hypervisors will then be imaged and deployed, associated with that
Neutron port. You should ensure that all nodes are correctly associated with
the right baremetal instance; you can do this by running a baremetal node show
- on any given hypervisor node and comparing the server uuid to the metadata on
+ on any given hypervisor node and comparing the server UUID to the metadata on
the Nova instance.
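
For example, the association can be checked with the Baremetal and Compute
CLIs (node and server identifiers below are placeholders):

.. code-block:: console

   openstack baremetal node show <node> -f value -c instance_uuid
   openstack server show <instance-uuid> -c id -c name
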

Once the nodes are deployed, we can use Kayobe to configure them as compute
- hosts, running kayobe overcloud host configure on these nodes will ensure that
- all networking, package and various other host configurations are setup
+ hosts. More information about Kayobe host configuration is available in the
+ :kayobe-doc:`upstream Kayobe documentation <configuration/reference/hosts.html>`.

.. code-block:: console

   kayobe overcloud host configure --limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>

Following host configuration we can begin deploying OpenStack services to the
- baremetal hypervisors by invoking kayobe overcloud service deploy. Nova
- services will be deployed to the baremetal hosts.
-
- .. code-block:: console
-
-    kayobe overcloud service deploy --kolla-limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
+ baremetal hypervisors by invoking ``kayobe overcloud service deploy``.
439
434
440
435
Un-enrolling hypervisors
441
436
========================
442
437
443
438
To convert baremetal hypervisors into regular baremetal compute instances you
444
- will need to drain the hypervisor of all running compute instances, you should
445
- first invoke the nova-compute-disable playbook to ensure all Nova services on
439
+ will need to drain the hypervisor of all running compute instances, First invoke
440
+ the `` nova-compute-disable.yml `` Ansible playbook to ensure all Nova services on
446
441
the baremetal node are disabled and compute instances will not be allocated to
447
442
this node.
448
443
449
444
.. code-block :: console
450
445
451
- (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
446
+ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
452
447
453
448
Now the Nova services are disabled you should also ensure any existing compute
454
449
instances are moved elsewhere by invoking the nova-compute-drain playbook
455
450
456
451
.. code-block :: console
457
452
458
- (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-drain.yml
453
+ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-drain.yml
459
454
460
- Now the node has no instances allocated to it you can delete the instance using
461
- the OpenStack CLI and the node will be moved back to ``available `` state.
455
+ Now the node has no instances allocated to it you can delete the baremetal instance
456
+ using the OpenStack CLI and the node is moved back to ``available `` state.
462
457
463
458
.. code-block :: console
464
459
465
- (os-venv) $ openstack server delete ...
460
+ openstack server delete ...