
Commit 29bb935

Merge pull request ceph#54322 from bluikko/patch-22
doc/cephadm/services: remove excess rendered indentation in osd.rst

Reviewed-by: Zac Dover <[email protected]>
2 parents ab6fa2f + 329df49 commit 29bb935

File tree

1 file changed (+40 -41 lines changed)

doc/cephadm/services/osd.rst

Lines changed: 40 additions & 41 deletions
@@ -15,10 +15,9 @@ To print a list of devices discovered by ``cephadm``, run this command:

 .. prompt:: bash #

-    ceph orch device ls [--hostname=...] [--wide] [--refresh]
+   ceph orch device ls [--hostname=...] [--wide] [--refresh]

-Example
-::
+Example::

    Hostname  Path      Type  Serial        Size  Health   Ident  Fault  Available
    srv-01    /dev/sdb  hdd   15P0A0YFFRD6  300G  Unknown  N/A    N/A    No
@@ -44,7 +43,7 @@ enable cephadm's "enhanced device scan" option as follows;

 .. prompt:: bash #

-    ceph config set mgr mgr/cephadm/device_enhanced_scan true
+   ceph config set mgr mgr/cephadm/device_enhanced_scan true

 .. warning::
   Although the libstoragemgmt library performs standard SCSI inquiry calls,
@@ -175,16 +174,16 @@ will happen without actually creating the OSDs.

 For example:

-  .. prompt:: bash #
+.. prompt:: bash #

-    ceph orch apply osd --all-available-devices --dry-run
+   ceph orch apply osd --all-available-devices --dry-run

-  ::
+::

-    NAME                   HOST   DATA      DB  WAL
-    all-available-devices  node1  /dev/vdb  -   -
-    all-available-devices  node2  /dev/vdc  -   -
-    all-available-devices  node3  /dev/vdd  -   -
+   NAME                   HOST   DATA      DB  WAL
+   all-available-devices  node1  /dev/vdb  -   -
+   all-available-devices  node2  /dev/vdc  -   -
+   all-available-devices  node3  /dev/vdd  -   -

 .. _cephadm-osd-declarative:

@@ -199,9 +198,9 @@ command completes will be automatically found and added to the cluster.

 We will examine the effects of the following command:

-  .. prompt:: bash #
+.. prompt:: bash #

-    ceph orch apply osd --all-available-devices
+   ceph orch apply osd --all-available-devices

 After running the above command:
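For readers following the discussion: the ``--all-available-devices`` flag is shorthand for an OSD service spec roughly like the sketch below. This is an illustration, not the literal spec cephadm generates; the ``all: true`` device filter is the documented way to match every eligible device.

service_type: osd
service_id: all-available-devices   # id cephadm uses for this service
placement:
  host_pattern: '*'                 # apply on every host
spec:
  data_devices:
    all: true                       # consume every available, eligible device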

@@ -214,17 +213,17 @@ If you want to avoid this behavior (disable automatic creation of OSD on availab

 .. prompt:: bash #

-    ceph orch apply osd --all-available-devices --unmanaged=true
+   ceph orch apply osd --all-available-devices --unmanaged=true

 .. note::

-    Keep these three facts in mind:
+   Keep these three facts in mind:

-    - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.
+   - The default behavior of ``ceph orch apply`` causes cephadm constantly to reconcile. This means that cephadm creates OSDs as soon as new drives are detected.

-    - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.
+   - Setting ``unmanaged: True`` disables the creation of OSDs. If ``unmanaged: True`` is set, nothing will happen even if you apply a new OSD service.

-    - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.
+   - ``ceph orch daemon add`` creates OSDs, but does not add an OSD service.

 * For cephadm, see also :ref:`cephadm-spec-unmanaged`.
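The ``--unmanaged=true`` flag shown above can also be expressed in a spec file. A minimal sketch, reusing the illustrative service from the previous hunk:

service_type: osd
service_id: all-available-devices
unmanaged: true                     # stop cephadm from creating new OSDs for this spec
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true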

@@ -252,7 +251,7 @@ Example:

 Expected output::

-    Scheduled OSD(s) for removal
+   Scheduled OSD(s) for removal

 OSDs that are not safe to destroy will be rejected.

@@ -275,14 +274,14 @@ You can query the state of OSD operation with the following command:

 .. prompt:: bash #

-    ceph orch osd rm status
+   ceph orch osd rm status

 Expected output::

-    OSD_ID  HOST         STATE                    PG_COUNT  REPLACE  FORCE  STARTED_AT
-    2       cephadm-dev  done, waiting for purge  0         True     False  2020-07-17 13:01:43.147684
-    3       cephadm-dev  draining                 17        False    True   2020-07-17 13:01:45.162158
-    4       cephadm-dev  started                  42        False    True   2020-07-17 13:01:45.162158
+   OSD_ID  HOST         STATE                    PG_COUNT  REPLACE  FORCE  STARTED_AT
+   2       cephadm-dev  done, waiting for purge  0         True     False  2020-07-17 13:01:43.147684
+   3       cephadm-dev  draining                 17        False    True   2020-07-17 13:01:45.162158
+   4       cephadm-dev  started                  42        False    True   2020-07-17 13:01:45.162158


 When no PGs are left on the OSD, it will be decommissioned and removed from the cluster.
@@ -304,11 +303,11 @@ Example:

 .. prompt:: bash #

-    ceph orch osd rm stop 4
+   ceph orch osd rm stop 4

 Expected output::

-    Stopped OSD(s) removal
+   Stopped OSD(s) removal

 This resets the initial state of the OSD and takes it off the removal queue.

@@ -329,7 +328,7 @@ Example:

 Expected output::

-    Scheduled OSD(s) for replacement
+   Scheduled OSD(s) for replacement

 This follows the same procedure as the procedure in the "Remove OSD" section, with
 one exception: the OSD is not permanently removed from the CRUSH hierarchy, but is
@@ -436,10 +435,10 @@ the ``ceph orch ps`` output in the ``MEM LIMIT`` column::

 To exclude an OSD from memory autotuning, disable the autotune option
 for that OSD and also set a specific memory target. For example,

-  .. prompt:: bash #
+.. prompt:: bash #

-    ceph config set osd.123 osd_memory_target_autotune false
-    ceph config set osd.123 osd_memory_target 16G
+   ceph config set osd.123 osd_memory_target_autotune false
+   ceph config set osd.123 osd_memory_target 16G


 .. _drivegroups:
@@ -502,7 +501,7 @@ Example

 .. prompt:: bash [monitor.1]#

-    ceph orch apply -i /path/to/osd_spec.yml --dry-run
+   ceph orch apply -i /path/to/osd_spec.yml --dry-run


@@ -512,9 +511,9 @@ Filters
 -------

 .. note::
-    Filters are applied using an `AND` gate by default. This means that a drive
-    must fulfill all filter criteria in order to get selected. This behavior can
-    be adjusted by setting ``filter_logic: OR`` in the OSD specification.
+   Filters are applied using an `AND` gate by default. This means that a drive
+   must fulfill all filter criteria in order to get selected. This behavior can
+   be adjusted by setting ``filter_logic: OR`` in the OSD specification.

 Filters are used to assign disks to groups, using their attributes to group
 them.
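To illustrate the ``filter_logic: OR`` behavior described in the note, here is a sketch of a spec that selects a drive matching *either* filter; the service id, rotational flag, and size bound are example values:

service_type: osd
service_id: osd_filter_or_example
placement:
  host_pattern: '*'
spec:
  filter_logic: OR        # a drive matching ANY filter below is selected
  data_devices:
    rotational: 1         # spinning drives ...
    size: ':2TB'          # ... or any drive up to 2 TB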
@@ -524,7 +523,7 @@ information about the attributes with this command:

 .. code-block:: bash

-    ceph-volume inventory </path/to/disk>
+   ceph-volume inventory </path/to/disk>

 Vendor or Model
 ^^^^^^^^^^^^^^^
@@ -633,9 +632,9 @@ but want to use only the first two, you could use `limit`:

 .. code-block:: yaml

-    data_devices:
-      vendor: VendorA
-      limit: 2
+   data_devices:
+     vendor: VendorA
+     limit: 2

 .. note:: `limit` is a last resort and shouldn't be used if it can be avoided.
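In a complete spec file, the ``limit`` snippet above sits under ``data_devices`` like so (the service id and placement are illustrative):

service_type: osd
service_id: limit_example
placement:
  host_pattern: '*'
spec:
  data_devices:
    vendor: VendorA
    limit: 2              # use at most two matching drives per host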

@@ -858,8 +857,8 @@ See :ref:`orchestrator-cli-placement-spec`

 .. note::

-    Assuming each host has a unique disk layout, each OSD
-    spec needs to have a different service id
+   Assuming each host has a unique disk layout, each OSD
+   spec needs to have a different service id


 Dedicated wal + db
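As an illustration of the note on unique service ids: with per-host disk layouts, two specs can be applied from one file, separated by a YAML document marker. Hostnames and device paths below are hypothetical:

service_type: osd
service_id: osd_spec_node1   # each spec needs its own id
placement:
  hosts:
    - node1
spec:
  data_devices:
    paths:
      - /dev/sdb
---
service_type: osd
service_id: osd_spec_node2
placement:
  hosts:
    - node2
spec:
  data_devices:
    paths:
      - /dev/sdc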
@@ -989,7 +988,7 @@ activates all existing OSDs on a host.

 .. prompt:: bash #

-    ceph cephadm osd activate <host>...
+   ceph cephadm osd activate <host>...

 This will scan all existing disks for OSDs and deploy corresponding daemons.
