
.. source/access_to_services.rst

.. include:: vars.rst

==================
Access to Services
==================

OpenStack Services
==================

Accessing Horizon
-----------------

The OpenStack web UI is available at: |horizon_url|

This site is accessible |horizon_access|.

Accessing the OpenStack CLI
---------------------------

A simple way to get started with the OpenStack command-line interface is to
install the client in a Python virtual environment.

This can be done from |public_api_access_host| (for example), or any machine
that has access to |public_vip|:

.. code-block:: console

openstack# python3 -m venv openstack-venv
openstack# source openstack-venv/bin/activate
openstack# pip install -U pip
openstack# pip install python-openstackclient
openstack# source <project>-openrc.sh

The ``<project>-openrc.sh`` file can be downloaded from the OpenStack Dashboard
(Horizon):

.. image:: _static/openrc.png
:alt: Downloading an openrc file from Horizon
:class: no-scaled-link
:width: 200

Now it should be possible to run OpenStack commands:

.. code-block:: console

openstack# openstack server list

Accessing Deployed Instances
----------------------------

The external network of OpenStack, called |public_network|, connects to the
subnet |public_subnet|. This network is accessible |floating_ip_access|.

Any OpenStack instance can make outgoing connections to this network, via a
router that connects the internal network of the project to the
|public_network| network.

To enable incoming connections (e.g. SSH), a floating IP is required. A
floating IP is allocated and associated via OpenStack. Security groups must be
set to permit the kind of connectivity required (i.e. to define the ports that
must be opened).
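
For example, a floating IP can be allocated, associated with an instance, and
opened up for SSH as follows. The names in angle brackets are illustrative
placeholders for this deployment's actual network, instance and security group
names:

.. code-block:: console

   openstack# openstack floating ip create <public-network>
   openstack# openstack server add floating ip <instance> <floating-ip-address>
   openstack# openstack security group rule create --protocol tcp --dst-port 22 <security-group>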

Monitoring Services
===================

Access to OpenSearch Dashboards
-------------------------------

OpenStack control plane logs are aggregated from all servers by Fluentd and
stored in OpenSearch. The control plane logs can be accessed from
OpenSearch using OpenSearch Dashboards, which is available at the following URL:
|opensearch_dashboard_url|

To log in, use the ``opensearch`` user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
:substitutions:

kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^opensearch

Access to Grafana
-----------------

Control plane metrics can be visualised in Grafana dashboards. Grafana can be
found at the following address: |grafana_url|

To log in, use the |grafana_username| user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
:substitutions:

kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^grafana_admin_password

Access to Prometheus Alertmanager
---------------------------------

Control plane alerts can be visualised and managed in Alertmanager, which can
be found at the following address: |alertmanager_url|

To log in, use the ``admin`` user. The password is auto-generated by
Kolla-Ansible and can be extracted from the encrypted passwords file
(|kolla_passwords|):

.. code-block:: console
:substitutions:

kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml --vault-password-file |vault_password_file_path| | grep ^prometheus_alertmanager_password


.. ifconfig:: deployment['wazuh']

Access to Wazuh Manager
-----------------------

To access the Wazuh Manager dashboard, navigate to the IP address
of |wazuh_manager_name| (|wazuh_manager_url|).

You can log in to the dashboard with the username ``admin``. The
password for ``admin`` is defined in the secret
``opendistro_admin_password``, which can be found within
``etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-secrets.yml``.

.. note::

Use ``ansible-vault`` to view Wazuh secrets:

.. code-block:: console
:substitutions:

kayobe# ansible-vault view --vault-password-file |vault_password_file_path| $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-secrets.yml

.. source/baremetal_management.rst

.. include:: vars.rst

======================================
Bare Metal Compute Hardware Management
======================================

Bare metal compute nodes are managed by the Ironic services.
This section describes elements of the configuration of this service.

.. _ironic-node-lifecycle:

Ironic node life cycle
----------------------

The deployment process is documented in the `Ironic User Guide <https://docs.openstack.org/ironic/latest/user/index.html>`__.
This OpenStack deployment uses the
`direct deploy method <https://docs.openstack.org/ironic/latest/user/index.html#example-1-pxe-boot-and-direct-deploy-process>`__.

The Ironic state machine is documented `here <https://docs.openstack.org/ironic/latest/user/states.html>`__. The rest of
this documentation refers to these states and assumes familiarity with them.

High level overview of state transitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following sections describe the state transitions for various Ironic operations at a high level,
focusing on the steps where dynamic switch reconfiguration is triggered.
For a more detailed overview, refer to the :ref:`ironic-node-lifecycle` section.

Provisioning
~~~~~~~~~~~~

Provisioning starts when an instance is created in Nova using a bare metal flavor.

- Node starts in the available state (available)
- User provisions an instance (deploying)
- Ironic will switch the node onto the provisioning network (deploying)
- Ironic will power on the node and will await a callback (wait-callback)
- Ironic will image the node with an operating system using the image provided at creation (deploying)
- Ironic switches the node onto the tenant network(s) via neutron (deploying)
- Node transitions to the active state (active)
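
From the user's perspective, provisioning is a normal server creation using a
bare metal flavor, for example (the names in angle brackets are illustrative
placeholders):

.. code-block:: console

   openstack# openstack server create --flavor <baremetal-flavor> --image <image> --network <network> --key-name <keypair> <instance-name>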

.. _baremetal-management-deprovisioning:

Deprovisioning
~~~~~~~~~~~~~~

Deprovisioning starts when an instance created in Nova using a bare metal flavor is destroyed.

If automated cleaning is enabled, it occurs when nodes are deprovisioned.

- Node starts in active state (active)
- User deletes instance (deleting)
- Ironic will remove the node from any tenant network(s) (deleting)
- Ironic will switch the node onto the cleaning network (deleting)
- Ironic will power on the node and will await a callback (clean-wait)
- Node boots into Ironic Python Agent and issues callback, Ironic starts cleaning (cleaning)
- Ironic removes node from cleaning network (cleaning)
- Node transitions to available (available)

If automated cleaning is disabled:

- Node starts in active state (active)
- User deletes instance (deleting)
- Ironic will remove the node from any tenant network(s) (deleting)
- Node transitions to available (available)
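
The transitions above can be observed while deleting an instance, for example:

.. code-block:: console

   openstack# openstack server delete <instance>
   openstack# openstack baremetal node show <node> -f value -c provision_state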

Cleaning
~~~~~~~~

Manual cleaning is not part of the regular state transitions when using Nova; however, nodes can be manually cleaned by administrators.

- Node starts in the manageable state (manageable)
- User triggers cleaning with API (cleaning)
- Ironic will switch the node onto the cleaning network (cleaning)
- Ironic will power on the node and will await a callback (clean-wait)
- Node boots into Ironic Python Agent and issues callback, Ironic starts cleaning (cleaning)
- Ironic removes node from cleaning network (cleaning)
- Node transitions back to the manageable state (manageable)
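
As a sketch, manual cleaning can be triggered with the bare metal CLI as
follows. The clean step shown (``erase_devices_metadata``) is an illustrative
example from the Ironic documentation, not a recommendation for this deployment:

.. code-block:: console

   openstack# openstack baremetal node manage <node>
   openstack# openstack baremetal node clean --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]' <node>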

Rescuing
~~~~~~~~

This feature is not used; the required rescue network is not currently configured.

Baremetal networking
--------------------

Baremetal networking with the Neutron Networking Generic Switch ML2 driver requires a combination of static and dynamic switch configuration.

.. _static-switch-config:

Static switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Static physical network configuration is managed via Kayobe.

.. TODO: Fill in the switch configuration

- Some initial switch configuration is required before Networking Generic Switch can take over the management of an interface.
  First, LACP must be configured on the switch ports attached to the baremetal node, e.g.:

.. code-block:: shell

The interface is then partially configured:

.. code-block:: shell

For :ref:`ironic-node-discovery` to work, you need to manually switch the port to the provisioning network:

.. note::

   You only need to do this if Ironic is not yet aware of the node.

Configuration with Kayobe
^^^^^^^^^^^^^^^^^^^^^^^^^

Kayobe can be used to apply the :ref:`static-switch-config`.

- Upstream documentation can be found `here <https://docs.openstack.org/kayobe/latest/configuration/reference/physical-network.html>`__.
- Kayobe does all the switch configuration that isn't :ref:`dynamically updated using Ironic <dynamic-switch-configuration>`.
- Optionally switches the node onto the provisioning network (when using ``--enable-discovery``)

  + **NOTE**: This is a dangerous operation, as it can wipe out the dynamic VLAN configuration applied by Neutron/Ironic.
    You should only run this when initially enrolling a node, and should always use the ``--interface-description-limit`` option. For example:

    .. code-block:: console

       kayobe physical network configure --interface-description-limit <description> --group switches --display --enable-discovery

    In this example, ``--display`` is used to preview the switch configuration without applying it.

.. TODO: Fill in information about how switches are configured in kayobe-config, with links

- Configuration is done using a combination of ``group_vars`` and ``host_vars``.

.. _dynamic-switch-configuration:

Dynamic switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ironic dynamically configures the switches using the Neutron `Networking Generic Switch <https://docs.openstack.org/networking-generic-switch/latest/>`_ ML2 driver.

- Used to toggle the baremetal nodes onto different networks

  + Can use any VLAN network defined in OpenStack, provided that the VLAN has been trunked to the controllers,
    as this is required for DHCP to function.
  + See :ref:`ironic-node-lifecycle`, which attempts to illustrate when switch reconfigurations happen.

- Only configures VLAN membership of the switch interfaces or port groups. To prevent conflicts with the static switch configuration,
  the convention used is: once a node is in service in Ironic, VLAN membership should not be adjusted manually and
  should be left under Ironic's control, i.e. *don't* use ``--enable-discovery`` without an interface limit when configuring the
  switches with Kayobe.
- Ironic is configured to use the ``neutron`` networking driver.

.. _ngs-commands:

Commands that NGS will execute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Networking Generic Switch is mainly concerned with toggling the ports onto different VLANs. It
cannot fully configure the switch.

.. TODO: Fill in the switch configuration

- Switching the port onto the provisioning network

.. code-block:: shell

- Switching the port onto the tenant network.

.. code-block:: shell

- When deleting the instance, the VLANs are removed from the port, using:

.. code-block:: shell

NGS will save the configuration after each reconfiguration (by default).

Ports managed by NGS
^^^^^^^^^^^^^^^^^^^^

The command below lists the port UUID, node UUID and switch port information for each bare metal port:

.. code-block:: bash

openstack baremetal port list --field uuid --field node_uuid --field local_link_connection --format value

NGS will manage VLAN membership for ports when the ``local_link_connection`` fields match one of the switches in ``ml2_conf.ini``.
The rest of the switch configuration is static.
The switch configuration that NGS will apply to these ports is detailed in :ref:`dynamic-switch-configuration`.
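
If a port's switch details need to be corrected so that NGS manages it, the
``local_link_connection`` attributes can be set on the Ironic port. The values
in angle brackets are illustrative placeholders:

.. code-block:: console

   openstack# openstack baremetal port set <port-uuid> \
                --local-link-connection switch_id=<switch-mac-address> \
                --local-link-connection port_id=<switch-port-name> \
                --local-link-connection switch_info=<switch-name>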

.. _ironic-node-discovery:

Ironic node discovery
---------------------

Please refer to `Baremetal Compute Node Management <https://docs.openstack.org/kayobe/latest/administration/bare-metal.html>`__.


.. _tor-switch-configuration:

Top of Rack (ToR) switch configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Networking Generic Switch must be aware of the Top-of-Rack switch connected to the new node.
Switches managed by NGS are configured in ``ml2_conf.ini``.
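
As an illustrative sketch only (the device type and credentials shown are
assumptions, not values from this deployment), a switch entry in
``ml2_conf.ini`` has the following shape:

.. code-block:: ini

   [genericswitch:switch1]
   device_type = netmiko_dell_os10
   ip = <switch management IP>
   username = <username>
   password = <password>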

.. TODO: Fill in details about how switches are added to NGS config in kayobe-config

After adding switches to the NGS configuration, Neutron must be redeployed.

Considerations when booting baremetal compared to VMs
------------------------------------------------------

- You can only use networks of type ``vlan``
- Without using trunk ports, it is only possible to directly attach one network to each port or port group of an instance.

  * To access other networks you can use routers
  * You can still attach floating IPs

- Instances take much longer to provision (expect at least 15 minutes)
- When booting an instance, use one of the flavors that maps to a baremetal node via the ``RESOURCE_CLASS`` configured on the flavor.
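
As a sketch, such a flavor is typically created with a custom resource class and
the standard VCPU/RAM/disk resources zeroed out. The flavor and resource class
names here are illustrative, not this deployment's actual values:

.. code-block:: console

   openstack# openstack flavor create --vcpus 1 --ram 1024 --disk 10 bm-example
   openstack# openstack flavor set bm-example \
                --property resources:CUSTOM_BAREMETAL_EXAMPLE=1 \
                --property resources:VCPU=0 \
                --property resources:MEMORY_MB=0 \
                --property resources:DISK_GB=0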