Commit 9a04702: Merge pull request #53 from stackhpc/update-2024
Remove generic docs and update contents
2 parents 5506852 + 0988cd9

23 files changed: +392 -3334 lines

source/access_to_services.rst

Lines changed: 134 additions & 0 deletions
@@ -0,0 +1,134 @@

.. include:: vars.rst

==================
Access to Services
==================

OpenStack Services
==================

Accessing Horizon
-----------------

The OpenStack web UI is available at: |horizon_url|

This site is accessible |horizon_access|.

Accessing the OpenStack CLI
---------------------------

A simple way to get started with the OpenStack command-line interface is to
install it in a Python virtual environment.

This can be done from |public_api_access_host| (for example), or any machine
that has access to |public_vip|:

.. code-block:: console

   openstack# python3 -m venv openstack-venv
   openstack# source openstack-venv/bin/activate
   openstack# pip install -U pip
   openstack# pip install python-openstackclient
   openstack# source <project>-openrc.sh

The ``<project>-openrc.sh`` file can be downloaded from the OpenStack Dashboard
(Horizon):

.. image:: _static/openrc.png
   :alt: Downloading an openrc file from Horizon
   :class: no-scaled-link
   :width: 200
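
For reference, a downloaded openrc file sets authentication environment
variables. A typical example looks like the following (all values here are
illustrative; the password is usually prompted for rather than stored):

.. code-block:: bash

   export OS_AUTH_URL=https://openstack.example.com:5000/v3
   export OS_PROJECT_NAME="myproject"
   export OS_USER_DOMAIN_NAME="Default"
   export OS_USERNAME="myuser"
   # The real file prompts for the password interactively:
   read -sr OS_PASSWORD_INPUT
   export OS_PASSWORD=$OS_PASSWORD_INPUT
   export OS_REGION_NAME="RegionOne"
   export OS_IDENTITY_API_VERSION=3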

Now it should be possible to run OpenStack commands:

.. code-block:: console

   openstack# openstack server list

Accessing Deployed Instances
----------------------------

The external network of OpenStack, called |public_network|, connects to the
subnet |public_subnet|. This network is accessible |floating_ip_access|.

Any OpenStack instance can make outgoing connections to this network, via a
router that connects the internal network of the project to the
|public_network| network.

To enable incoming connections (e.g. SSH), a floating IP is required. A
floating IP is allocated and associated via OpenStack. Security groups must be
set to permit the kind of connectivity required (i.e. to define the ports that
must be opened).
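
These steps can be performed with the OpenStack CLI. A minimal sketch, assuming
an instance named ``my-server``; the network name and floating IP shown are
placeholders for your deployment's values:

.. code-block:: console

   openstack# openstack security group rule create --proto tcp --dst-port 22 default
   openstack# openstack floating ip create <external-network>
   openstack# openstack server add floating ip my-server <floating-ip-address>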

Monitoring Services
===================

Access to OpenSearch Dashboards
-------------------------------

OpenStack control plane logs are aggregated from all servers by Fluentd and
stored in OpenSearch. The control plane logs can be accessed from OpenSearch
using OpenSearch Dashboards, which is available at the following URL:
|opensearch_dashboard_url|

To log in, use the ``opensearch`` user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
   :substitutions:

   kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^opensearch

Access to Grafana
-----------------

Control plane metrics can be visualised in Grafana dashboards. Grafana can be
found at the following address: |grafana_url|

To log in, use the |grafana_username| user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
   :substitutions:

   kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^grafana_admin_password

Access to Prometheus Alertmanager
---------------------------------

Control plane alerts can be visualised and managed in Alertmanager, which can
be found at the following address: |alertmanager_url|

To log in, use the ``admin`` user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
   :substitutions:

   kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^prometheus_alertmanager_password

.. ifconfig:: deployment['wazuh']

   Access to Wazuh Manager
   -----------------------

   To access the Wazuh Manager dashboard, navigate to the IP address
   of |wazuh_manager_name| (|wazuh_manager_url|).

   You can log in to the dashboard with the username ``admin``. The
   password for ``admin`` is defined in the secret
   ``opendistro_admin_password``, which can be found within
   ``etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-secrets.yml``.

   .. note::

      Use ``ansible-vault`` to view Wazuh secrets:

      .. code-block:: console
         :substitutions:

         kayobe# ansible-vault view --vault-password-file |vault_password_file_path| $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-secrets.yml

source/baremetal_management.rst

Lines changed: 210 additions & 10 deletions
@@ -1,18 +1,218 @@

======================================
Bare Metal Compute Hardware Management
======================================

Bare metal compute nodes are managed by the Ironic services.
This section describes elements of the configuration of this service.

.. _ironic-node-lifecycle:

Ironic node life cycle
----------------------

The deployment process is documented in the `Ironic User Guide <https://docs.openstack.org/ironic/latest/user/index.html>`__.
OpenStack deployment uses the
`direct deploy method <https://docs.openstack.org/ironic/latest/user/index.html#example-1-pxe-boot-and-direct-deploy-process>`__.

The Ironic state machine can be found `here <https://docs.openstack.org/ironic/latest/user/states.html>`__. The rest of
this documentation refers to these states and assumes familiarity with them.

High-level overview of state transitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following sections describe the state transitions for various Ironic operations
at a high level, focusing on the steps where dynamic switch reconfiguration is
triggered. For a more detailed overview, refer to the Ironic documentation linked
above.

Provisioning
~~~~~~~~~~~~

Provisioning starts when an instance is created in Nova using a bare metal flavor.

- Node starts in the available state (available)
- User provisions an instance (deploying)
- Ironic will switch the node onto the provisioning network (deploying)
- Ironic will power on the node and will await a callback (wait-callback)
- Ironic will image the node with an operating system using the image provided at creation (deploying)
- Ironic switches the node onto the tenant network(s) via Neutron (deploying)
- Node transitions to the active state (active)
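
The transitions above can be followed from the command line. A quick sketch,
assuming a node named ``baremetal-01`` (the name is illustrative):

.. code-block:: console

   openstack# openstack baremetal node show baremetal-01 -f value -c provision_state
   openstack# openstack baremetal node list --provision-state wait-callback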

.. _baremetal-management-deprovisioning:

Deprovisioning
~~~~~~~~~~~~~~

Deprovisioning starts when an instance created in Nova using a bare metal flavor is destroyed.

If automated cleaning is enabled, it occurs when nodes are deprovisioned:

- Node starts in the active state (active)
- User deletes the instance (deleting)
- Ironic will remove the node from any tenant network(s) (deleting)
- Ironic will switch the node onto the cleaning network (deleting)
- Ironic will power on the node and will await a callback (clean-wait)
- Node boots into the Ironic Python Agent and issues a callback; Ironic starts cleaning (cleaning)
- Ironic removes the node from the cleaning network (cleaning)
- Node transitions to the available state (available)

If automated cleaning is disabled:

- Node starts in the active state (active)
- User deletes the instance (deleting)
- Ironic will remove the node from any tenant network(s) (deleting)
- Node transitions to the available state (available)

Cleaning
~~~~~~~~

Manual cleaning is not part of the regular state transitions when using Nova;
however, nodes can be manually cleaned by administrators.

- Node starts in the manageable state (manageable)
- User triggers cleaning via the API (cleaning)
- Ironic will switch the node onto the cleaning network (cleaning)
- Ironic will power on the node and will await a callback (clean-wait)
- Node boots into the Ironic Python Agent and issues a callback; Ironic starts cleaning (cleaning)
- Ironic removes the node from the cleaning network (cleaning)
- Node transitions back to the manageable state (manageable)
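
A manual clean can be triggered with the Ironic CLI. A minimal sketch, assuming
a node named ``baremetal-01`` and a metadata-erase clean step (both are
illustrative choices):

.. code-block:: console

   openstack# openstack baremetal node manage baremetal-01
   openstack# openstack baremetal node clean --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]' baremetal-01

After cleaning completes, the node returns to the manageable state;
``openstack baremetal node provide`` can then be used to make it available
again.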

Rescuing
~~~~~~~~

This feature is not used: the required rescue network is not currently configured.

Baremetal networking
--------------------

Baremetal networking with the Neutron Networking Generic Switch ML2 driver requires a combination of static and dynamic switch configuration.

.. _static-switch-config:

Static switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Static physical network configuration is managed via Kayobe.

.. TODO: Fill in the switch configuration

- Some initial switch configuration is required before Networking Generic Switch can take over the management of an interface.
  First, LACP must be configured on the switch ports attached to the baremetal node, e.g.:

  .. code-block:: shell

  The interface is then partially configured:

  .. code-block:: shell

For :ref:`ironic-node-discovery` to work, you need to manually switch the port to the provisioning network.

.. note::

   You only need to do this if Ironic isn't aware of the node.

Configuration with Kayobe
^^^^^^^^^^^^^^^^^^^^^^^^^

Kayobe can be used to apply the :ref:`static-switch-config`.

- Upstream documentation can be found `here <https://docs.openstack.org/kayobe/latest/configuration/reference/physical-network.html>`__.
- Kayobe does all the switch configuration that isn't :ref:`dynamically updated using Ironic <dynamic-switch-configuration>`.
- Optionally switches the node onto the provisioning network (when using ``--enable-discovery``)

  + NOTE: This is a dangerous operation, as it can wipe out the dynamic VLAN configuration applied by Neutron/Ironic.
    You should only run this when initially enrolling a node, and should always use the ``--interface-description-limit`` option. For example:

    .. code-block:: console

       kayobe physical network configure --interface-description-limit <description> --group switches --display --enable-discovery

    In this example, ``--display`` is used to preview the switch configuration without applying it.

.. TODO: Fill in information about how switches are configured in kayobe-config, with links

- Configuration is done using a combination of ``group_vars`` and ``host_vars``

.. _dynamic-switch-configuration:

Dynamic switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ironic dynamically configures the switches using the Neutron `Networking Generic Switch <https://docs.openstack.org/networking-generic-switch/latest/>`_ ML2 driver.

- Used to toggle the baremetal nodes onto different networks

  + Can use any VLAN network defined in OpenStack, provided that the VLAN has been trunked to the controllers,
    as this is required for DHCP to function.
  + See :ref:`ironic-node-lifecycle`, which illustrates when switch reconfiguration happens.

- Only configures VLAN membership of the switch interfaces or port groups. To prevent conflicts with the static switch configuration,
  the convention used is: after the node is in service in Ironic, VLAN membership should not be manually adjusted and
  should be left under Ironic's control, i.e. *don't* use ``--enable-discovery`` without an interface limit when configuring the
  switches with Kayobe.
- Ironic is configured to use the Neutron networking driver.
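
For example, a VLAN network usable by bare metal instances can be created as
follows (the physical network name, VLAN segment and subnet range shown are
illustrative):

.. code-block:: console

   openstack# openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 123 bm-example
   openstack# openstack subnet create --network bm-example --subnet-range 192.168.123.0/24 bm-example-subnet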

.. _ngs-commands:

Commands that NGS will execute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Networking Generic Switch is mainly concerned with toggling ports onto different VLANs. It
cannot fully configure the switch.

.. TODO: Fill in the switch configuration

- Switching the port onto the provisioning network

  .. code-block:: shell

- Switching the port onto the tenant network

  .. code-block:: shell

- When deleting the instance, the VLANs are removed from the port, using:

  .. code-block:: shell

NGS will save the configuration after each reconfiguration (by default).

Ports managed by NGS
^^^^^^^^^^^^^^^^^^^^

The command below extracts a list of port UUID, node UUID and switch port information:

.. code-block:: bash

   openstack baremetal port list --field uuid --field node_uuid --field local_link_connection --format value

NGS will manage VLAN membership for ports when the ``local_link_connection`` fields match one of the switches in ``ml2_conf.ini``.
The rest of the switch configuration is static.
The switch configuration that NGS will apply to these ports is detailed in :ref:`dynamic-switch-configuration`.
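
For illustration, the ``local_link_connection`` information is set when a bare
metal port is enrolled. All values in the following sketch are hypothetical:

.. code-block:: console

   openstack# openstack baremetal port create aa:bb:cc:dd:ee:f0 --node <node-uuid> --local-link-connection switch_id=00:11:22:33:44:55 --local-link-connection switch_info=switch1 --local-link-connection port_id=Ethernet1/1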

.. _ironic-node-discovery:

Ironic node discovery
---------------------

Please refer to `Baremetal Compute Node Management <https://docs.openstack.org/kayobe/latest/administration/bare-metal.html>`__.

.. _tor-switch-configuration:

Top of Rack (ToR) switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Networking Generic Switch must be aware of the Top-of-Rack switch connected to the new node.
Switches managed by NGS are configured in ``ml2_conf.ini``.

.. TODO: Fill in details about how switches are added to NGS config in kayobe-config

After adding switches to the NGS configuration, Neutron must be redeployed.
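
For illustration, an NGS switch entry in ``ml2_conf.ini`` typically resembles
the following (the switch name, device type and credentials are hypothetical):

.. code-block:: ini

   [genericswitch:switch1]
   device_type = netmiko_dell_os10
   ngs_mac_address = 00:11:22:33:44:55
   ip = 192.0.2.10
   username = admin
   password = secret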

Considerations when booting baremetal compared to VMs
-----------------------------------------------------

- You can only use networks of type ``vlan``
- Without using trunk ports, it is only possible to directly attach one network to each port or port group of an instance.

  + To access other networks, you can use routers
  + You can still attach floating IPs

- Instances take much longer to provision (expect at least 15 minutes)
- When booting an instance, use one of the flavors that maps to a baremetal node via the ``RESOURCE_CLASS`` configured on the flavor.
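
As a sketch, a bare metal flavor of this kind is typically created as follows
(the flavor name, sizes and the resource class ``CUSTOM_BAREMETAL_A`` are all
illustrative):

.. code-block:: console

   openstack# openstack flavor create --ram 1048576 --disk 480 --vcpus 64 bm.a
   openstack# openstack flavor set bm.a --property resources:CUSTOM_BAREMETAL_A=1 --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0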
