@@ -55,7 +55,7 @@ Configuring Prometheus Alerts
 -----------------------------
 
 Alerts are defined in code and stored in Kayobe configuration. See ``*.rules``
-files in ``${KAYOBE_CONFIG_PATH}/kolla/config/prometheus`` as a model to add
+files in ``$KAYOBE_CONFIG_PATH/kolla/config/prometheus`` as a model to add
 custom rules.
 
 Silencing Prometheus Alerts
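As a quick sanity check before deploying custom rules, ``promtool`` can
validate a rules file if it is available on the control host; a minimal
sketch (the ``custom.rules`` file name is illustrative):

.. code-block:: console

   kayobe# promtool check rules $KAYOBE_CONFIG_PATH/kolla/config/prometheus/custom.rules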
@@ -88,7 +88,7 @@ Generating Alerts from Metrics
 ++++++++++++++++++++++++++++++
 
 Alerts are defined in code and stored in Kayobe configuration. See ``*.rules``
-files in ``${KAYOBE_CONFIG_PATH}/kolla/config/prometheus`` as a model to add
+files in ``$KAYOBE_CONFIG_PATH/kolla/config/prometheus`` as a model to add
 custom rules.
 
 Control Plane Shutdown Procedure
@@ -124,7 +124,7 @@ The password can be found using:
 
 .. code-block:: console
 
-   kayobe# ansible-vault view ${KAYOBE_CONFIG_PATH}/kolla/passwords.yml \
+   kayobe# ansible-vault view $KAYOBE_CONFIG_PATH/kolla/passwords.yml \
       --vault-password-file <Vault password file path> | grep ^database
 
 Checking RabbitMQ
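With the database password to hand, a hedged sketch of checking Galera
cluster health from a controller (the ``mariadb`` container name and
``root`` user assume kolla defaults):

.. code-block:: console

   [stack@controller0 ~]$ docker exec mariadb mysql -u root -p'<Database password>' \
         -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"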
@@ -135,6 +135,7 @@ RabbitMQ health is determined using the command ``rabbitmqctl cluster_status``:
 .. code-block:: console
 
    [stack@controller0 ~]$ docker exec rabbitmq rabbitmqctl cluster_status
+
    Cluster status of node rabbit@controller0 ...
    [{nodes,[{disc,['rabbit@controller0','rabbit@controller1',
                    'rabbit@controller2']}]},
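Beyond node membership, it is also worth confirming that the cluster
reports no network partitions; a minimal sketch using the same command:

.. code-block:: console

   [stack@controller0 ~]$ docker exec rabbitmq rabbitmqctl cluster_status | grep -i -A1 partition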
@@ -180,20 +181,18 @@ If you are shutting down a single hypervisor, to avoid down time to tenants it
 is advisable to migrate all of the instances to another machine. See
 :ref:`evacuating-all-instances`.
 
-.. ifconfig:: deployment['ceph_managed']
-
-   Ceph
-   ----
+Ceph
+----
 
-   The following guide provides a good overview:
-   https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/director_installation_and_usage/sect-rebooting-ceph
+The following guide provides a good overview:
+https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/director_installation_and_usage/sect-rebooting-ceph
 
 Shutting down the seed VM
 -------------------------
 
 .. code-block:: console
 
-   kayobe# virsh shutdown <Seed node>
+   kayobe# virsh shutdown <Seed hostname>
 
 .. _full-shutdown:
 
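Where live migration is available, a hedged sketch of draining a single
hypervisor before shutting it down (the ``--live-migration`` flag assumes
a recent OpenStack client; older releases use different migrate options):

.. code-block:: console

   # From a host that can reach OpenStack
   openstack compute service set --disable <Hypervisor name> nova-compute
   for server in $(openstack server list --all-projects --host <Hypervisor name> -f value -c ID); do
       openstack server migrate --live-migration $server
   done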
@@ -262,7 +261,7 @@ hypervisor is powered on. If it does not, it can be started with:
 
 .. code-block:: console
 
-   kayobe# virsh start seed-0
+   kayobe# virsh start <Seed hostname>
 
 Full power on
 -------------
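To confirm the seed VM is running, and optionally mark it to start
automatically with the host in future, a minimal sketch:

.. code-block:: console

   kayobe# virsh list --all
   kayobe# virsh autostart <Seed hostname>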
@@ -340,13 +339,14 @@ To see the list of hypervisor names:
 
 .. code-block:: console
 
-   admin# openstack hypervisor list
+   # From a host that can reach OpenStack
+   openstack hypervisor list
 
 To boot an instance on a specific hypervisor:
 
 .. code-block:: console
 
-   admin# openstack server create --flavor <Flavour name>--network <Network name> --key-name <key> --image <Image name> --availability-zone nova::<Hypervisor name> <VM name>
+   openstack server create --flavor <Flavour name> --network <Network name> --key-name <key> --image <Image name> --availability-zone nova::<Hypervisor name> <VM name>
 
 Cleanup Procedures
 ==================
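To verify where the instance actually landed, an admin can inspect the
hypervisor attribute reported by Nova; a minimal sketch:

.. code-block:: console

   openstack server show <VM name> -c OS-EXT-SRV-ATTR:hypervisor_hostname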
@@ -360,22 +360,23 @@ perform the following cleanup procedure regularly:
 
 .. code-block:: console
 
-   admin# for user in $(openstack user list --domain magnum -f value -c Name | grep -v magnum_trustee_domain_admin); do
-             if openstack coe cluster list -c uuid -f value | grep -q $(echo $user | sed 's/_[0-9a-f]*$//'); then
-                 echo "$user still in use, not deleting"
-             else
-                 openstack user delete --domain magnum $user
-             fi
-          done
+   for user in $(openstack user list --domain magnum -f value -c Name | grep -v magnum_trustee_domain_admin); do
+       if openstack coe cluster list -c uuid -f value | grep -q $(echo $user | sed 's/_[0-9a-f]*$//'); then
+           echo "$user still in use, not deleting"
+       else
+           openstack user delete --domain magnum $user
+       fi
+   done
 
 OpenSearch indexes retention
 =============================
 
 To alter default rotation values for OpenSearch, edit
-``${KAYOBE_CONFIG_PATH}/kolla/globals.yml``:
+``$KAYOBE_CONFIG_PATH/kolla/globals.yml``:
 
 .. code-block:: console
+
    # Duration after which index is closed (default 30)
    opensearch_soft_retention_period_days: 90
    # Duration after which index is deleted (default 60)
@@ -384,8 +385,8 @@ To alter default rotation values for OpenSearch, edit
 Reconfigure OpenSearch with new values:
 
 .. code-block:: console
 
-   kayobe overcloud service reconfigure --kolla-tags opensearch
+   kayobe# kayobe overcloud service reconfigure --kolla-tags opensearch
 
-For more information see the ` upstream documentation
+For more information see the `upstream documentation
 <https://docs.openstack.org/kolla-ansible/latest/reference/logging-and-monitoring/central-logging-guide.html#applying-log-retention-policies>`__.
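Once reconfigured, the effect on indices can be spot-checked against the
OpenSearch API; a hedged sketch (the VIP address and port 9200 are
deployment-specific assumptions):

.. code-block:: console

   kayobe# curl -s "http://<OpenSearch VIP>:9200/_cat/indices?v"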