Commit 9fb51bb

doc/cephadm: improve host-management.rst
Signed-off-by: Anthony D'Atri <[email protected]>
1 parent 2b0101e commit 9fb51bb

1 file changed: +31 -27 lines changed

doc/cephadm/host-management.rst

@@ -74,9 +74,9 @@ To add each new host to the cluster, perform two steps:
    ceph orch host add host2 10.10.0.102
    ceph orch host add host3 10.10.0.103
 
-It is best to explicitly provide the host IP address. If an IP is
+It is best to explicitly provide the host IP address. If an address is
 not provided, then the host name will be immediately resolved via
-DNS and that IP will be used.
+DNS and the result will be used.
 
 One or more labels can also be included to immediately label the
 new host. For example, by default the ``_admin`` label will make
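Hosts can also be added declaratively rather than one command at a time. As a sketch (the hostname, address, and label below are hypothetical), a host service spec equivalent to the commands above might look like:

```yaml
service_type: host
hostname: host2          # hypothetical host name
addr: 10.10.0.102        # explicit IP, so DNS resolution is not relied upon
labels:
  - _admin               # label applied immediately when the host is added
```

Such a spec file can be applied with ``ceph orch apply -i <spec-file>``.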
@@ -104,7 +104,7 @@ To drain all daemons from a host, run a command of the following form:
 The ``_no_schedule`` and ``_no_conf_keyring`` labels will be applied to the
 host. See :ref:`cephadm-special-host-labels`.
 
-If you only want to drain daemons but leave managed ceph conf and keyring
+If you want to drain daemons but leave managed ``ceph.conf`` and keyring
 files on the host, you may pass the ``--keep-conf-keyring`` flag to the
 drain command.
 
@@ -115,7 +115,8 @@ drain command.
 This will apply the ``_no_schedule`` label to the host but not the
 ``_no_conf_keyring`` label.
 
-All OSDs on the host will be scheduled to be removed. You can check the progress of the OSD removal operation with the following command:
+All OSDs on the host will be scheduled to be removed. You can check the
+progress of the OSD removal operation with the following command:
 
 .. prompt:: bash #
 
@@ -148,7 +149,7 @@ cluster by running the following command:
 Offline host removal
 --------------------
 
-Even if a host is offline and can not be recovered, it can be removed from the
+If a host is offline and cannot be recovered, it can be removed from the
 cluster by running a command of the following form:
 
 .. prompt:: bash #
@@ -250,8 +251,8 @@ Rescanning Host Devices
 =======================
 
 Some servers and external enclosures may not register device removal or insertion with the
-kernel. In these scenarios, you'll need to perform a host rescan. A rescan is typically
-non-disruptive, and can be performed with the following CLI command:
+kernel. In these scenarios, you'll need to perform a device rescan on the appropriate host.
+A rescan is typically non-disruptive, and can be performed with the following CLI command:
 
 .. prompt:: bash #
 
@@ -316,40 +317,40 @@ create a new CRUSH host located in the specified hierarchy.
 
 The ``location`` attribute will only affect the initial CRUSH location. Subsequent
 changes of the ``location`` property will be ignored. Also, removing a host will not remove
-any CRUSH buckets unless the ``--rm-crush-entry`` flag is provided to the ``orch host rm`` command
+an associated CRUSH bucket unless the ``--rm-crush-entry`` flag is provided to the ``orch host rm`` command.
 
 See also :ref:`crush_map_default_types`.
 
 Removing a host from the CRUSH map
 ==================================
 
-The ``ceph orch host rm`` command has support for removing the bucket entry for the host
-in the CRUSH map. This is done by providing the ``--rm-crush-entry`` flag.
+The ``ceph orch host rm`` command has support for removing the associated host bucket
+from the CRUSH map. This is done by providing the ``--rm-crush-entry`` flag.
 
 .. prompt:: bash [ceph:root@host1/]#
 
    ceph orch host rm host1 --rm-crush-entry
 
-When this flag is specified, cephadm will attempt to remove the bucket entry
-for the host from the CRUSH map as part of the host removal process. Note that if
+When this flag is specified, cephadm will attempt to remove the host bucket
+from the CRUSH map as part of the host removal process. Note that if
 it fails to do so, cephadm will report the failure and the host will remain under
 cephadm control.
 
 .. note::
 
-   The removal from the CRUSH map will fail if there are OSDs deployed on the
+   Removal from the CRUSH map will fail if there are OSDs deployed on the
    host. If you would like to remove all the host's OSDs as well, please start
    by using the ``ceph orch host drain`` command to do so. Once the OSDs
-   are all gone, then you may have cephadm remove the CRUSH entry along with the
-   host using the ``--rm-crush-entry`` flag.
+   have been removed, then you may direct cephadm to remove the CRUSH bucket
+   along with the host using the ``--rm-crush-entry`` flag.
 
 OS Tuning Profiles
 ==================
 
-Cephadm can be used to manage operating-system-tuning profiles that apply sets
-of sysctl settings to sets of hosts.
+Cephadm can be used to manage operating system tuning profiles that apply
+``sysctl`` settings to sets of hosts.
 
-Create a YAML spec file in the following format:
+To do so, create a YAML spec file in the following format:
 
 .. code-block:: yaml
 
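The unchanged YAML example that follows this hunk is not shown in the diff. For orientation, a tuned-profile spec of the kind described generally takes this shape (profile name, hosts, and settings below are hypothetical):

```yaml
profile_name: 50-low-latency       # becomes 50-low-latency-cephadm-tuned-profile.conf
placement:
  hosts:
    - host1
    - host2
settings:
  net.core.somaxconn: "1024"
  vm.swappiness: "10"
```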
@@ -368,18 +369,21 @@ Apply the tuning profile with the following command:
 
    ceph orch tuned-profile apply -i <tuned-profile-file-name>
 
-This profile is written to ``/etc/sysctl.d/`` on each host that matches the
-hosts specified in the placement block of the yaml, and ``sysctl --system`` is
+This profile is written to a file under ``/etc/sysctl.d/`` on each host
+specified in the ``placement`` block, then ``sysctl --system`` is
 run on the host.
 
 .. note::
 
    The exact filename that the profile is written to within ``/etc/sysctl.d/``
    is ``<profile-name>-cephadm-tuned-profile.conf``, where ``<profile-name>`` is
-   the ``profile_name`` setting that you specify in the YAML spec. Because
+   the ``profile_name`` setting that you specify in the YAML spec. We suggest
+   naming these profiles following the usual ``sysctl.d`` ``NN-xxxxx`` convention. Because
    sysctl settings are applied in lexicographical order (sorted by the filename
-   in which the setting is specified), you may want to set the ``profile_name``
-   in your spec so that it is applied before or after other conf files.
+   in which the setting is specified), you may want to carefully choose
+   the ``profile_name`` in your spec so that it is applied before or after other
+   conf files. Careful selection ensures that values supplied here override or
+   do not override those in other ``sysctl.d`` files as desired.
 
 .. note::
 
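The override behavior described above hinges on ``sysctl --system`` reading ``*.conf`` files in sorted filename order, with later assignments to the same key winning. A minimal sketch of that ordering rule, using a temporary directory in place of ``/etc/sysctl.d/``:

```shell
# Simulate sysctl.d ordering: the last file (in lexicographic order)
# to assign a key provides the effective value.
dir=$(mktemp -d)
echo 'vm.swappiness = 10' > "$dir/10-base.conf"
echo 'vm.swappiness = 1'  > "$dir/90-override.conf"
# Read the files in sorted order and keep the last assignment,
# mirroring how duplicate keys are resolved.
winner=$(cat "$dir"/10-base.conf "$dir"/90-override.conf |
         awk -F'=' '/vm.swappiness/ {v=$2} END {gsub(/ /,"",v); print v}')
echo "$winner"    # 1 -- the 90-* file overrides the 10-* file
rm -rf "$dir"
```

A profile named ``90-override`` therefore beats a distribution default shipped as ``10-base.conf``, which is why the ``NN-`` prefix in ``profile_name`` matters.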
@@ -388,7 +392,7 @@ run on the host.
 
 .. note::
 
-   Applying tuned profiles is idempotent when the ``--no-overwrite`` option is
+   Applying tuning profiles is idempotent when the ``--no-overwrite`` option is
    passed. Moreover, if the ``--no-overwrite`` option is passed, existing
    profiles with the same name are not overwritten.
 
@@ -548,7 +552,7 @@ There are two ways to customize this configuration for your environment:
 
    We do *not recommend* this approach. The path name must be
    visible to *any* mgr daemon, and cephadm runs all daemons as
-   containers. That means that the file either need to be placed
+   containers. That means that the file must either be placed
    inside a customized container image for your deployment, or
    manually distributed to the mgr data directory
    (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
@@ -601,8 +605,8 @@ Note that ``man hostname`` recommends ``hostname`` to return the bare
 host name:
 
    The FQDN (Fully Qualified Domain Name) of the system is the
-   name that the resolver(3) returns for the host name, such as,
-   ursula.example.com. It is usually the hostname followed by the DNS
+   name that the resolver(3) returns for the host name, for example
+   ``ursula.example.com``. It is usually the short hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.
 
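The relationship between the bare host name, the FQDN, and the DNS domain described in that excerpt can be sketched with plain shell parameter expansion (the name used is the man page's own example):

```shell
# Split an FQDN into the bare host name (before the first dot)
# and the DNS domain (after the first dot).
fqdn="ursula.example.com"
short="${fqdn%%.*}"    # bare host name, what 'hostname' should return
domain="${fqdn#*.}"    # DNS domain, what 'dnsdomainname' reports
echo "$short $domain"  # ursula example.com
```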