
Commit 4c09477

Merge pull request ceph#61503 from zdover23/wip-doc-2025-01-24-cephfs-disaster-recovery-experts
doc/cephfs: edit disaster-recovery-experts (6 of x)

Reviewed-by: Anthony D'Atri <[email protected]>
2 parents d87e230 + 5670054 commit 4c09477


doc/cephfs/disaster-recovery-experts.rst

Lines changed: 19 additions & 17 deletions
@@ -216,11 +216,11 @@ Using an alternate metadata pool for recovery
 This procedure has not been extensively tested. It should be undertaken only
 with great care.

-If an existing file system is damaged and inoperative, then it is possible to
-create a fresh metadata pool and to attempt the reconstruction the of the
-damaged and inoperative file system's metadata into the new pool, while leaving
-the old metadata in place. This could be used to make a safer attempt at
-recovery since the existing metadata pool would not be modified.
+If an existing CephFS file system is damaged and inoperative, then it is
+possible to create a fresh metadata pool and to attempt the reconstruction of
+the damaged and inoperative file system's metadata into the new pool, while
+leaving the old metadata in place. This could be used to make a safer attempt
+at recovery since the existing metadata pool would not be modified.

 .. caution::

@@ -229,9 +229,9 @@ recovery since the existing metadata pool would not be modified.
    contents of the data pool while this is the case. After recovery is
    complete, archive or delete the damaged metadata pool.

-#. To begin, the existing file system should be taken down to prevent further
-   modification of the data pool. Unmount all clients and then use the
-   following command to mark the file system failed:
+#. Take down the existing file system in order to prevent any further
+   modification of the data pool. Unmount all clients. When all clients have
+   been unmounted, use the following command to mark the file system failed:

    .. prompt:: bash #

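The command body under the ``.. prompt:: bash #`` directive above is unchanged context, so it is not shown in this hunk. A minimal sketch of what the step runs, assuming an illustrative client mount point (``<fs_name>`` is the placeholder the document itself uses for the damaged file system):

    # Unmount every client first (mount point is illustrative), then mark the
    # damaged file system as failed so that no MDS keeps writing to its pools.
    umount /mnt/cephfs
    ceph fs fail <fs_name>
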
@@ -241,8 +241,11 @@ recovery since the existing metadata pool would not be modified.

    ``<fs_name>`` here and below refers to the original, damaged file system.

-#. Next, create a recovery file system in which we will populate a new metadata
-   pool that is backed by the original data pool:
+#. Create a recovery file system. This recovery file system will be used to
+   recover the data in the damaged pool. First, a fresh metadata pool will be
+   created. Then you will create the recovery file system, attaching the new
+   metadata pool to the original data pool so that the new metadata pool is
+   backed by the old data pool.

    .. prompt:: bash #

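The commands for this step are likewise unchanged context and therefore not visible in the hunk. A sketch of the typical invocation, assuming ``cephfs_recovery_meta`` as the name of the fresh metadata pool (only ``cephfs_recovery`` is confirmed elsewhere in this diff) and ``<data_pool>`` as the original data pool:

    # Create a fresh metadata pool, then create the recovery file system that
    # pairs it with the original, already-populated data pool.
    ceph osd pool create cephfs_recovery_meta
    ceph fs new cephfs_recovery cephfs_recovery_meta <data_pool> --recover --allow-dangerous-metadata-overlay
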
@@ -255,7 +258,7 @@ recovery since the existing metadata pool would not be modified.
    The ``--recover`` flag prevents any MDS daemon from joining the new file
    system.

-#. Next, we will create the intial metadata for the fs:
+#. Create the initial metadata for the file system:

    .. prompt:: bash #

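The reset commands for this step are hidden as unchanged context; the ``cephfs-journal-tool`` line visible at the top of the next hunk is their tail end. A sketch of the table resets that usually precede it:

    # Reset the session, snap and inode tables of rank 0 of the recovery
    # file system so that it starts from clean initial metadata.
    cephfs-table-tool cephfs_recovery:0 reset session
    cephfs-table-tool cephfs_recovery:0 reset snap
    cephfs-table-tool cephfs_recovery:0 reset inode
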
@@ -273,7 +276,7 @@ recovery since the existing metadata pool would not be modified.

       cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force --yes-i-really-really-mean-it

-#. Now perform the recovery of the metadata pool from the data pool:
+#. Use the following commands to rebuild the metadata pool from the data pool:

    .. prompt:: bash #

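The ``cephfs-data-scan`` commands referenced by this step are unchanged context and not shown here. A sketch of the usual sequence, with ``cephfs_recovery_meta`` and ``<data_pool>`` again standing in for the fresh metadata pool and the original data pool; the exact flags in the file itself may differ:

    # Rebuild metadata into the alternate (recovery) metadata pool by scanning
    # the objects in the original data pool.
    cephfs-data-scan init --force-init --filesystem cephfs_recovery --alternate-pool cephfs_recovery_meta
    cephfs-data-scan scan_extents --alternate-pool cephfs_recovery_meta --filesystem <fs_name> <data_pool>
    cephfs-data-scan scan_inodes --alternate-pool cephfs_recovery_meta --filesystem <fs_name> --force-corrupt <data_pool>
    cephfs-data-scan scan_links --filesystem cephfs_recovery
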
@@ -322,15 +325,14 @@ recovery since the existing metadata pool would not be modified.
    Verify that the config has not been set globally or with a local
    ``ceph.conf`` file.

-#. Now, allow an MDS daemon to join the recovery file system:
+#. Allow an MDS daemon to join the recovery file system:

    .. prompt:: bash #

       ceph fs set cephfs_recovery joinable true

-#. Finally, run a forward :doc:`scrub </cephfs/scrub>` to repair recursive
-   statistics. Ensure that you have an MDS daemon running and issue the
-   following command:
+#. Run a forward :doc:`scrub </cephfs/scrub>` to repair recursive statistics.
+   Ensure that you have an MDS daemon running and issue the following command:

    .. prompt:: bash #

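The scrub invocation itself is unchanged context, so it does not appear in the hunk. It is typically of the form below; the path ``/`` and the comma-separated scrub options follow the scrub documentation linked in the step above:

    # Ask rank 0 of the recovery file system to scrub from the root downwards,
    # repairing recursive statistics along the way.
    ceph tell mds.cephfs_recovery:0 scrub start / recursive,repair,force
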
@@ -351,7 +353,7 @@ recovery since the existing metadata pool would not be modified.

 If the data pool is also corrupt, some files may not be restored because
 the backtrace information associated with them is lost. If any data
-objects are missing (due to issues like lost Placement Groups on the
+objects are missing (due to issues like lost placement groups on the
 data pool), the recovered files will contain holes in place of the
 missing data.
