Generate an SSH keypair. The public key will be registered in OpenStack as a
keypair and authorised by the instances deployed by Terraform. The private and
public keys will be transferred to the Ansible control host to allow it to
connect to the other hosts. Note that password-protected keys are not currently
supported.

.. code-block:: console
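
   # Assumed invocation: create an unencrypted keypair, since
   # password-protected keys are not supported.
   ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''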

Or you can source the provided `init.sh` script, which shall initialise
Terraform and prompt for your credentials:

.. code-block:: console

   source init.sh
   OpenStack Cloud Name: sms-lab
   Password:

You must ensure that you have `Ansible installed <https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html>`_ on your local machine.
If the deployed instances are behind an SSH bastion, you must ensure that your SSH config is set up appropriately with a proxy jump.

.. code-block:: console

   Host lab-bastion
       HostName BastionIPAddr
       User username
       IdentityFile ~/.ssh/key

   Host 10.*
       ProxyJump=lab-bastion
       ForwardAgent no
       IdentityFile ~/.ssh/key
       UserKnownHostsFile /dev/null
       StrictHostKeyChecking no

Configure Terraform variables
=============================

Populate Terraform variables in `terraform.tfvars`. Examples are provided in
files named `*.tfvars.example`.
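
A hedged sketch of a minimal `terraform.tfvars` (variable names are from this
guide and the provided examples; the values are illustrative):

.. code-block:: console

   # terraform.tfvars (illustrative values; see *.tfvars.example for the full set)
   multinode_keypair = "multinode-keypair"
   prefix            = "changeme"
   ssh_public_key    = "~/.ssh/id_rsa.pub"
   storage_count     = "3"
   storage_flavor    = "general.v1.small"
   storage_disk_size = 100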

You will need to set the `multinode_keypair`, `prefix`, and `ssh_public_key`.
By default, Rocky Linux 9 will be used, but Ubuntu Jammy is also supported by
changing `multinode_image` to `overcloud-ubuntu-jammy-<release>-<datetime>` and
`ssh_user` to `ubuntu`.

The `multinode_flavor` will change the flavor used for controller and compute
nodes. Both virtual machines and baremetal are supported, but the `*_disk_size`
variables must be set to 0 when using baremetal hosts. This will stop a block
device being allocated. When any baremetal hosts are deployed, the
`multinode_vm_network` and `multinode_vm_subnet` should also be changed to
a VLAN network and associated subnet.

If `deploy_wazuh` is set to true, an infrastructure VM will be created that
hosts the Wazuh manager. The Wazuh deployment playbooks will also be triggered
automatically. If `add_ansible_control_fip` is set to true, a floating IP will
be created and attached to the Ansible control host. In that case,
`ansible_control_fip_pool` defines the pool from which to allocate the floating
IP, and the floating IP will be used for SSH access to the control host.

Configure Ansible variables
===========================

Review the vars defined within `ansible/vars/defaults.yml`. Here you can
customise the versions of kayobe, kayobe-config or openstack-config. Make sure
to define `ssh_key_path` to point to the location of the SSH key in use by the
nodes, and also `vxlan_vni`, which should be a unique value between 1 and
100,000. The VNI should be much smaller than the officially supported limit of
16,777,215, as we encounter errors when attempting to bring up interfaces that
use a high VNI. You must set `vault_password_path`; this should be set to the
path to a file containing the Ansible vault password.
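
For example (a sketch; the variable names are those described above, the values
are illustrative):

.. code-block:: console

   # ansible/vars/defaults.yml (excerpt, illustrative values)
   ssh_key_path: ~/.ssh/id_rsa
   vxlan_vni: 1001
   vault_password_path: ~/vault-password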

Deployment
==========

Generate a plan:

.. code-block:: console
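
   terraform plan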

Apply the changes:

.. code-block:: console

   terraform apply -auto-approve

You should have requested a number of resources to be spawned on OpenStack.

Configure Ansible control host
==============================

Run the `configure-hosts.yml` playbook to configure the Ansible control host.

.. code-block:: console
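
   # Assumed invocation; the inventory path may differ in your checkout.
   ansible-playbook -i ansible/inventory.yml ansible/configure-hosts.yml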

This playbook sequentially executes 2 other playbooks:

#. ``grow-control-host.yml`` - Applies LVM configuration to the control host to ensure it has enough space to continue with the rest of the deployment. Tag: ``lvm``
#. ``deploy-openstack-config.yml`` - Prepares the Ansible control host as a Kayobe control host, cloning the Kayobe configuration and installing virtual environments. Tag: ``deploy``

These playbooks are tagged so that they can be invoked or skipped using `--tags` or `--skip-tags` as required.

Deploy OpenStack
================

Once the Ansible control host has been configured with a Kayobe/OpenStack
configuration, you can begin the process of deploying OpenStack. This can be
achieved either by manually running the various commands to configure the hosts
and deploy the services, or automatically by using the generated
`deploy-openstack.sh` script. `deploy-openstack.sh` should be available within
the home directory on your Ansible control host, provided you ran
`deploy-openstack-config.yml` earlier. This script will go through the process
of performing the following tasks:

* kayobe control host bootstrap
* kayobe seed host configure
* kayobe overcloud host configure
* cephadm deployment
* kayobe overcloud service deploy
* openstack configuration
* tempest testing

Tempest test results will be written to ~/tempest-artifacts.

If you choose to opt for the automated method, you must first SSH into your
Ansible control host and then run the `deploy-openstack.sh` script. Start a
`tmux` session first to avoid halting the deployment if you are disconnected,
as shown in the sketch below.
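
.. code-block:: console

   # The control host address is an assumption; use the one from your deployment.
   ssh <ssh_user>@<ansible_control_host_ip>
   tmux new -s deploy
   ~/deploy-openstack.sh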

Accessing OpenStack
===================

After a successful deployment of OpenStack you may access the OpenStack API and
Horizon by proxying your connection via the seed node, as it has an interface
on the public network (192.168.39.X). Using software such as sshuttle will
allow for easy access.
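
For example (a sketch; substitute the seed's SSH user and address):

.. code-block:: console

   sshuttle -r <ssh_user>@<seed_ip> 192.168.39.0/24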

Important to note, this will proxy all DNS requests from your machine to the first…

After you are finished with the multinode environment, please destroy the nodes
to free up resources for others. This can be accomplished by using the provided
`scripts/tear-down.sh`, which will destroy your controllers, compute, seed and
storage nodes whilst leaving your Ansible control host and keypair intact.

If you would like to delete your Ansible control host then you can pass the
`-a` flag; if you would also like to remove your keypair, pass `-a -k`.
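
For example:

.. code-block:: console

   # Destroy the cluster nodes, keeping the control host and keypair
   scripts/tear-down.sh

   # Destroy everything, including the control host and keypair
   scripts/tear-down.sh -a -k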

Issues & Fixes
==============

Sometimes a compute instance fails to be provisioned by Terraform or fails on
boot for any reason. If this happens, the solution is to mark the resource as
tainted and perform `terraform apply` again, which shall destroy and rebuild
the failed instance.
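
For example (the resource address is hypothetical; use the address Terraform
reports for the failed instance):

.. code-block:: console

   terraform taint 'openstack_compute_instance_v2.compute[0]'
   terraform apply -auto-approve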