@@ -11,9 +11,9 @@ please refer to :ref:`cephadm-kayobe` documentation.
 Cephadm configuration location
 ------------------------------

-In kayobe-config repository, under ``etc/kayobe/cephadm.yml`` (or in a specific
+In kayobe-config repository, under ``$KAYOBE_CONFIG_PATH/cephadm.yml`` (or in a specific
 Kayobe environment when using multiple environments, e.g.
-``etc/kayobe/environments/<Environment Name>/cephadm.yml``)
+``$KAYOBE_CONFIG_PATH/environments/<environment name>/cephadm.yml``)

 StackHPC's Cephadm Ansible collection relies on multiple inventory groups:
@@ -22,12 +22,12 @@ StackHPC's Cephadm Ansible collection relies on multiple inventory groups:
 - ``osds``
 - ``rgws`` (optional)

-Those groups are usually defined in ``etc/kayobe/inventory/groups``.
+Those groups are usually defined in ``$KAYOBE_CONFIG_PATH/inventory/groups``.

 Running Cephadm playbooks
 -------------------------

-In kayobe-config repository, under ``etc/kayobe/ansible`` there is a set of
+In kayobe-config repository, under ``$KAYOBE_CONFIG_PATH/ansible`` there is a set of
 Cephadm-based playbooks utilising the stackhpc.cephadm Ansible Galaxy collection.

 ``cephadm.yml`` runs the end-to-end process of Cephadm deployment and
@@ -38,14 +38,14 @@ and they can be run separately.
 additional playbooks
 - ``cephadm-commands-pre.yml`` - Runs Ceph commands before post-deployment
 configuration (You can set a list of commands in the ``cephadm_commands_pre_extra``
-in ``cephadm.yml``)
+variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``)
 - ``cephadm-ec-profiles.yml`` - Defines Ceph EC profiles
 - ``cephadm-crush-rules.yml`` - Defines Ceph crush rules
 - ``cephadm-pools.yml`` - Defines Ceph pools
 - ``cephadm-keys.yml`` - Defines Ceph users/keys
 - ``cephadm-commands-post.yml`` - Runs Ceph commands after post-deployment
 configuration (You can set a list of commands in the ``cephadm_commands_post_extra``
-in ``cephadm.yml``)
+variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``)

 There are also other Ceph playbooks that are not part of ``cephadm.yml``
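
The pre/post command hooks mentioned above could be populated in
``$KAYOBE_CONFIG_PATH/cephadm.yml`` roughly as in the sketch below. The variable
names come from the playbook descriptions above, but the exact command format
expected by the ``stackhpc.cephadm`` roles, and the example settings shown, are
assumptions to verify against the playbooks before use.

.. code-block:: yaml

   # Sketch only: commands assumed to be run as "ceph <command>" by the
   # cephadm-commands-pre.yml / cephadm-commands-post.yml playbooks.
   cephadm_commands_pre_extra:
     # Illustrative Ceph config change applied before post-deployment configuration.
     - "config set global mon_max_pg_per_osd 500"

   cephadm_commands_post_extra:
     # Illustrative Ceph config change applied after post-deployment configuration.
     - "config set global osd_pool_default_size 3"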
@@ -102,7 +102,7 @@ Once all daemons are removed - you can remove the host:
 ceph orch host rm <host>

 And then remove the host from inventory (usually in
-``etc/kayobe/inventory/overcloud``)
+``$KAYOBE_CONFIG_PATH/inventory/overcloud``)

 Additional options/commands may be found in
 `Host management <https://docs.ceph.com/en/latest/cephadm/host-management/>`_
@@ -194,7 +194,7 @@ After removing OSDs, if the drives the OSDs were deployed on once again become
 available, Cephadm may automatically try to deploy more OSDs on these drives if
 they match an existing drivegroup spec.
 If this is not your desired action plan - it's best to modify the drivegroup
-spec before (``cephadm_osd_spec`` variable in ``etc/kayobe/cephadm.yml``).
+spec before (``cephadm_osd_spec`` variable in ``$KAYOBE_CONFIG_PATH/cephadm.yml``).
 Either set ``unmanaged: true`` to stop Cephadm from picking up new disks or
 modify it in some way that it no longer matches the drives you want to remove.
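
As an illustration of that last point, the drivegroup spec held in
``cephadm_osd_spec`` could be marked unmanaged roughly as sketched below. The
layout follows the generic Ceph OSD service specification rather than any
particular kayobe-config default, and the placement and device selection shown
are assumptions; adapt them to the spec actually in use.

.. code-block:: yaml

   # Sketch of an OSD drivegroup spec with automatic OSD creation disabled.
   cephadm_osd_spec:
     service_type: osd
     service_id: osd_spec_default
     placement:
       host_pattern: "*"
     spec:
       data_devices:
         all: true
     # Stop Cephadm from picking up newly available drives.
     unmanaged: true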