
Commit 98618aa

doc/ceph-volume: add spillover fix procedure
Add a procedure that explains how, after an upgrade, to move bytes that have
spilled over to a relatively slow device back to the faster device.

This procedure was developed by Chris Dunlop on the [ceph-users] mailing
list, here:
https://lists.ceph.io/hyperkitty/list/[email protected]/message/POPUFSZGXR3P2RPYPJ4WJ4HGHZ3QESF6/

Eugen Block requested the addition of this procedure to the documentation
on 30 Aug 2024.

Co-authored-by: Anthony D'Atri <[email protected]>
Signed-off-by: Zac Dover <[email protected]>
1 parent aed37cc commit 98618aa

1 file changed: +45 −0 lines changed


doc/ceph-volume/lvm/newdb.rst

@@ -9,3 +9,48 @@ Logical volume name format is vg/lv. Fails if OSD has already got attached DB.
Attach vgname/lvname as a DB volume to OSD 1::

    ceph-volume lvm new-db --osd-id 1 --osd-fsid 55BD4219-16A7-4037-BC20-0F158EFCC83D --target vgname/new_db

Reversing BlueFS Spillover to Slow Devices
------------------------------------------

Under certain circumstances, an OSD's RocksDB database can spill over onto
the slow storage device. When this happens, the cluster raises a BlueFS
spillover warning, and ``ceph health detail`` reports the specifics. Here is
an example of a spillover warning::

    osd.76 spilled over 128 KiB metadata from 'db' device (56 GiB used of 60 GiB) to slow device
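
The steps below use the shell variables ``$cid`` (the cluster fsid) and
``$osd`` (the numeric id of the affected OSD). Here is a minimal setup
sketch, assuming a cephadm-managed cluster and using ``osd.76`` from the
example warning above (the OSD id here is purely illustrative)::

    # Cluster fsid, used by the cephadm commands below
    cid=$(ceph fsid)

    # The OSD that reported the spillover (see "ceph health detail")
    osd=76

    # List every spillover warning currently raised by the cluster
    ceph health detail | grep -i spillover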

To move this DB metadata from the slower device back to the faster device,
take the following steps (a consolidated worked example follows the list):

#. Expand the database's logical volume (LV):

   .. prompt:: bash #

      lvextend -l ${size} ${lv}/${db} ${ssd_dev}

#. Stop the OSD:

   .. prompt:: bash #

      cephadm unit --fsid $cid --name osd.${osd} stop

#. Run the ``bluefs-bdev-expand`` command, which instructs BlueFS to make
   use of the additional space on the enlarged LV:

   .. prompt:: bash #

      cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd}

#. Run the ``bluefs-bdev-migrate`` command, which moves the BlueFS data that
   spilled over onto the slow device (``block``) back onto the DB device
   (``block.db``):

   .. prompt:: bash #

      cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db

#. Restart the OSD:

   .. prompt:: bash #

      cephadm unit --fsid $cid --name osd.${osd} start
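
The following is a consolidated sketch of the whole procedure, under the
assumption of a hypothetical OSD ``osd.76`` whose DB logical volume is
``ceph-db-76`` in volume group ``ceph-db``, backed by the SSD ``/dev/sdb``
(all of these names are illustrative, not defaults)::

    cid=$(ceph fsid)
    osd=76

    # Grow the DB LV. "-l +100%FREE" allocates all remaining free space in
    # the VG; substitute an explicit extent count to grow by a fixed amount.
    lvextend -l +100%FREE ceph-db/ceph-db-76 /dev/sdb

    # Stop the OSD, let BlueFS expand into the enlarged LV, migrate the
    # spilled-over metadata back onto the DB device, then restart the OSD.
    cephadm unit --fsid $cid --name osd.${osd} stop
    cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd}
    cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db
    cephadm unit --fsid $cid --name osd.${osd} start

    # Confirm that the spillover warning for this OSD has cleared
    ceph health detail | grep -i spillover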

.. note:: *The above procedure was developed by Chris Dunlop on the
   [ceph-users] mailing list, and can be seen in its original context
   here:* `[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)
   <https://lists.ceph.io/hyperkitty/list/[email protected]/message/POPUFSZGXR3P2RPYPJ4WJ4HGHZ3QESF6/>`_
