
Commit 003d52b

Merge pull request ceph#62426 from anthonyeleven/fixup-db_slots

doc/cephadm/services: Correct indentation in osd.rst

Reviewed-by: Zac Dover <[email protected]>
2 parents 6fc1a6d + 7b5f73f commit 003d52b

1 file changed (+25, -14 lines)

doc/cephadm/services/osd.rst

Lines changed: 25 additions & 14 deletions
@@ -7,8 +7,9 @@ List Devices
 ============
 
 ``ceph-volume`` scans each host in the cluster periodically in order
-to determine which devices are present and whether they are eligible to be
-used as OSDs.
+to determine the devices that are present and responsive. It is also
+determined whether each is eligible to be used for new OSDs in a block,
+DB, or WAL role.
 
 To print a list of devices discovered by ``cephadm``, run this command:
 
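The command block that the final context line above introduces lies outside this
hunk. For orientation only, a minimal sketch of that listing step, assuming the
standard orchestrator CLI that the changed text itself references::

  # List the devices cephadm has discovered on each host
  ceph orch device ls

  # Add detail columns, including any reasons a device is rejected for OSD use
  ceph orch device ls --wide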
@@ -31,10 +32,7 @@ Example::
   srv-03  /dev/sdc  hdd  15R0A0P7FRD6  300G  Unknown  N/A  N/A  No
   srv-03  /dev/sdd  hdd  15R0A0O7FRD6  300G  Unknown  N/A  N/A  No
 
-The ``--wide`` option shows device details,
-including any reasons that the device might not be eligible for use as an OSD.
-
-In the above example you can see fields named ``Health``, ``Ident``, and ``Fault``.
+In the above examples you can see fields named ``Health``, ``Ident``, and ``Fault``.
 This information is provided by integration with `libstoragemgmt`_. By default,
 this integration is disabled because `libstoragemgmt`_ may not be 100%
 compatible with your hardware. To direct Ceph to include these fields,
@@ -44,8 +42,19 @@ enable ``cephadm``'s "enhanced device scan" option as follows:
 
   ceph config set mgr mgr/cephadm/device_enhanced_scan true
 
+Note that the columns reported by ``ceph orch device ls`` may vary from release to
+release.
+
+The ``--wide`` option shows device details,
+including any reasons that the device might not be eligible for use as an OSD.
+Example (Reef)::
+
+  HOST            PATH      TYPE  DEVICE ID                                    SIZE   AVAILABLE  REFRESHED  REJECT REASONS
+  davidsthubbins  /dev/sdc  hdd   SEAGATE_ST20000NM002D_ZVTBJNGC17010W339UW25  18.1T  No         22m ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
+  nigeltufnel     /dev/sdd  hdd   SEAGATE_ST20000NM002D_ZVTBJNGC17010C3442787  18.1T  No         22m ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
+
 .. warning::
-  Although the ``libstoragemgmt`` library performs standard SCSI inquiry calls,
+  Although the ``libstoragemgmt`` library issues standard SCSI (SES) inquiry calls,
   there is no guarantee that your hardware and firmware properly implement these standards.
   This can lead to erratic behaviour and even bus resets on some older
   hardware. It is therefore recommended that, before enabling this feature,
@@ -732,8 +741,10 @@ There are multiple optional settings that specify the way OSDs are deployed.
 Add these options to an OSD spec for them to take effect.
 
 This example deploys encrypted OSDs on all unused drives. Note that if Linux
-MD mirroring is used for the boot, `/var/log`, or other volumes this spec _may_
+MD mirroring is used for the boot, ``/var/log``, or other volumes this spec *may*
 grab replacement or added drives before you can employ them for non-OSD purposes.
+The ``unmanaged`` attribute may be set to pause automatic deployment until you
+are ready.
 
 .. code-block:: yaml
 
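The YAML body of this example is cut off at the hunk boundary above. As an
illustration only, a minimal sketch of what such a spec could look like, assuming
the standard OSD service spec fields (``data_devices``, ``encrypted``,
``unmanaged``); the ``service_id`` here is a hypothetical name:

.. code-block:: yaml

  service_type: osd
  service_id: osd_spec_encrypted   # hypothetical name, for illustration only
  placement:
    host_pattern: '*'              # consider all hosts
  unmanaged: true                  # pause automatic deployment until set back to false
  spec:
    data_devices:
      all: true                    # claim all eligible, unused drives
    encrypted: true                # deploy the OSDs on encrypted (dmcrypt) volumes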
@@ -884,19 +895,19 @@ This can be specificed with two service specs in the same file:
   db_devices:
     model: MC-55-44-XZ    # Select only this model for WAL+DB offload
     limit: 2              # Select at most two for this purpose
-    db_slots: 5           # Back five slower HDD data devices with each
-
+    db_slots: 5           # Chop the DB device into this many slices and
+                          # use one for each of this many HDD OSDs
 ---
 service_type: osd
 service_id: osd_spec_ssd  # Unique so it doesn't overwrite the above
 placement:
   host_pattern: '*'
-spec:
+spec:                     # This scenario is uncommon
   data_devices:
     model: MC-55-44-XZ    # Select drives of this model for OSD data
-  db_devices:
-    vendor: VendorC       # Select drives of this brand for WAL+DB
-    db_slots: 2           # Back two slower SAS/SATA SSD data devices with each
+  db_devices:             # Select drives of this brand for WAL+DB. Since the
+    vendor: VendorC       # data devices are SAS/SATA SSDs this would make sense for NVMe SSDs
+    db_slots: 2           # Back two slower SAS/SATA SSD data devices with each NVMe slice
 
 This would create the desired layout by using all HDDs as data devices with two
 SATA/SAS SSDs assigned as dedicated DB/WAL devices, each backing five HDD OSDs.