@@ -11,9 +11,9 @@ please refer to :ref:`cephadm-kayobe` documentation.
 Cephadm configuration location
 ------------------------------
 
-In kayobe-config repository, under ``etc/kayobe/cephadm.yml`` (or in a specific
+In the kayobe-config repository, under ``$KAYOBE_CONFIG_PATH/cephadm.yml`` (or in a specific
 Kayobe environment when using multiple environments, e.g.
-``etc/kayobe/environments/<Environment Name>/cephadm.yml``)
+``$KAYOBE_CONFIG_PATH/environments/<environment name>/cephadm.yml``)
 
 StackHPC's Cephadm Ansible collection relies on multiple inventory groups:
 
@@ -22,12 +22,12 @@ StackHPC's Cephadm Ansible collection relies on multiple inventory groups:
 - ``osds``
 - ``rgws`` (optional)
 
-Those groups are usually defined in ``etc/kayobe/inventory/groups``.
+Those groups are usually defined in ``$KAYOBE_CONFIG_PATH/inventory/groups``.
 
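As a purely illustrative sketch (the group membership here is an assumption and
will differ per deployment), these Ceph groups are typically populated by
mapping them onto existing Kayobe groups in that file, for example:

.. code-block:: ini

   # Illustrative only: map the Ceph groups onto existing Kayobe groups.
   [mons:children]
   controllers

   [mgrs:children]
   controllers

   [osds:children]
   storage

   [rgws:children]
   controllers
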
 Running Cephadm playbooks
 -------------------------
 
-In kayobe-config repository, under ``etc/kayobe/ansible`` there is a set of
+In the kayobe-config repository, under ``$KAYOBE_CONFIG_PATH/ansible`` there is a set of
 Cephadm-based playbooks utilising the stackhpc.cephadm Ansible Galaxy collection.
 
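For example, the end-to-end deployment playbook described below can be run with
the Kayobe CLI; a minimal sketch, assuming a Kayobe environment has been
activated and ``$KAYOBE_CONFIG_PATH`` is set:

.. code-block:: console

   kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/cephadm.yml

The individual playbooks listed below can be run in the same way, e.g.
``$KAYOBE_CONFIG_PATH/ansible/cephadm-pools.yml``.
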
 ``cephadm.yml`` runs the end-to-end process of Cephadm deployment and
@@ -38,14 +38,14 @@ and they can be run separately.
   additional playbooks
 - ``cephadm-commands-pre.yml`` - Runs Ceph commands before post-deployment
   configuration (You can set a list of commands at ``cephadm_commands_pre_extra``
-  in ``cephadm.yml``)
+  variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``)
 - ``cephadm-ec-profiles.yml`` - Defines Ceph EC profiles
 - ``cephadm-crush-rules.yml`` - Defines Ceph crush rules
 - ``cephadm-pools.yml`` - Defines Ceph pools
 - ``cephadm-keys.yml`` - Defines Ceph users/keys
 - ``cephadm-commands-post.yml`` - Runs Ceph commands after post-deployment
   configuration (You can set a list of commands at ``cephadm_commands_post_extra``
-  in ``cephadm.yml``)
+  variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``)
 
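The ``cephadm_commands_pre_extra`` and ``cephadm_commands_post_extra`` variables
mentioned above are lists of Ceph commands. A hedged sketch of how they might be
set in ``$KAYOBE_CONFIG_PATH/cephadm.yml`` (the commands themselves, and the
assumption that the leading ``ceph`` is omitted, are illustrative and not taken
from this change):

.. code-block:: yaml

   # Illustrative only: extra commands run before/after post-deployment configuration.
   cephadm_commands_pre_extra:
     - "config set mon mon_allow_pool_delete true"
   cephadm_commands_post_extra:
     - "balancer mode upmap"
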
 There are also other Ceph playbooks that are not part of ``cephadm.yml``
 
@@ -102,7 +102,7 @@ Once all daemons are removed - you can remove the host:
    ceph orch host rm <host>
 
 And then remove the host from inventory (usually in
-``etc/kayobe/inventory/overcloud``)
+``$KAYOBE_CONFIG_PATH/inventory/overcloud``)
 
 Additional options/commands may be found in
 `Host management <https://docs.ceph.com/en/latest/cephadm/host-management/>`_
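For illustration, a quick way to confirm that no daemons remain before removing
the host (a sketch; ``<host>`` is a placeholder):

.. code-block:: console

   # List any daemons still placed on the host; the output should be empty.
   ceph orch ps <host>
   # Once it is empty, remove the host from the Ceph cluster.
   ceph orch host rm <host>
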
@@ -194,7 +194,7 @@ After removing OSDs, if the drives the OSDs were deployed on once again become
 available, Cephadm may automatically try to deploy more OSDs on these drives if
 they match an existing drivegroup spec.
 If this is not your desired action plan - it's best to modify the drivegroup
-spec before (``cephadm_osd_spec`` variable in ``etc/kayobe/cephadm.yml``).
+spec before (``cephadm_osd_spec`` variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``).
 Either set ``unmanaged: true`` to stop Cephadm from picking up new disks or
 modify it in some way that it no longer matches the drives you want to remove.
 
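For the first option, a hedged sketch of an unmanaged OSD spec in
``$KAYOBE_CONFIG_PATH/cephadm.yml`` (the service ID, placement and device
filters are illustrative assumptions):

.. code-block:: yaml

   # Illustrative only: adjust placement and device filters to the deployment.
   cephadm_osd_spec:
     service_type: osd
     service_id: osd_spec_default
     placement:
       host_pattern: "*"
     data_devices:
       all: true
     unmanaged: true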