@@ -103,31 +103,31 @@ If you know that you need also to reset the other tables, then replace
 MDS map reset
 -------------
 
-Once the in-RADOS state of the file system (i.e. contents of the metadata pool)
-is somewhat recovered, it may be necessary to update the MDS map to reflect
-the contents of the metadata pool. Use the following command to reset the MDS
-map to a single MDS daemon:
+When the in-RADOS state of the file system (that is, the contents of the
+metadata pool) has been somewhat recovered, it may be necessary to update the
+MDS map to reflect the new state of the metadata pool. Use the following
+command to reset the MDS map to a single MDS:
 
-::
+.. prompt:: bash #
 
-    ceph fs reset <fs name> --yes-i-really-mean-it
+   ceph fs reset <fs name> --yes-i-really-mean-it
 
-Once this is run, any in-RADOS state for MDS ranks other than 0 will be ignored:
-as a result it is possible for this to result in data loss.
+After this command has been run, any in-RADOS state for MDS ranks other than
+``0`` will be ignored. This means that running this command can result in data
+loss.
 
-One might wonder what the difference is between 'fs reset' and 'fs remove; fs
-new'. The key distinction is that doing a remove/new will leave rank 0 in
-'creating' state, such that it would overwrite any existing root inode on disk
-and orphan any existing files. In contrast, the 'reset' command will leave
-rank 0 in 'active' state such that the next MDS daemon to claim the rank will
-go ahead and use the existing in-RADOS metadata.
+There is a difference between the effects of the ``fs reset`` command and the
+``fs remove; fs new`` sequence. The ``fs reset`` command leaves rank ``0`` in
+the ``active`` state so that the next MDS daemon to claim the rank uses the
+existing in-RADOS metadata. The ``fs remove; fs new`` sequence leaves rank
+``0`` in the ``creating`` state, which means that any existing root inode on
+disk will be overwritten and any existing files will be orphaned.
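+
+As a purely illustrative sketch of the difference (not a recommendation), the
+two approaches correspond to command sequences like the following. ``fs rm``
+is the CLI spelling of "remove", and the placeholders are the same ones used
+elsewhere in this document:
+
+::
+
+    # "reset": rank 0 stays active and the existing in-RADOS metadata is reused
+    ceph fs reset <fs name> --yes-i-really-mean-it
+
+    # "remove; new": rank 0 comes back in the creating state, so the existing
+    # root inode would be overwritten and existing files orphaned
+    ceph fs rm <fs name> --yes-i-really-mean-it
+    ceph fs new <fs name> <metadata pool> <data pool>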
 
 Recovery from missing metadata objects
 --------------------------------------
 
-Depending on what objects are missing or corrupt, you may need to
-run various commands to regenerate default versions of the
-objects.
+Depending on which objects are missing or corrupt, you may need to run
+additional commands to regenerate default versions of the objects.
 
 ::
 
@@ -143,28 +143,30 @@ objects.
     cephfs-data-scan init
 
 Finally, you can regenerate metadata objects for missing files
-and directories based on the contents of a data pool. This is
-a three-phase process. First, scanning *all* objects to calculate
-size and mtime metadata for inodes. Second, scanning the first
-object from every file to collect this metadata and inject it into
-the metadata pool. Third, checking inode linkages and fixing found
-errors.
+and directories based on the contents of a data pool. This is
+a three-phase process:
+
+#. Scanning *all* objects to calculate size and mtime metadata for inodes.
+#. Scanning the first object from every file to collect this metadata and
+   inject it into the metadata pool.
+#. Checking inode linkages and fixing found errors.
 
 ::
 
     cephfs-data-scan scan_extents [<data pool> [<extra data pool> ...]]
     cephfs-data-scan scan_inodes [<data pool>]
    cephfs-data-scan scan_links
 
-'scan_extents' and 'scan_inodes' commands may take a *very long* time
-if there are many files or very large files in the data pool.
+The ``scan_extents`` and ``scan_inodes`` commands may take a *very long* time
+if the data pool contains many files or very large files.
 
-To accelerate the process, run multiple instances of the tool.
+To accelerate the process of running ``scan_extents`` or ``scan_inodes``, run
+multiple instances of the tool:
 
 Decide on a number of workers, and pass each worker a number within
-the range 0-(worker_m - 1).
+the range ``0-(worker_m - 1)`` (that is, zero to "worker_m" minus 1).
 
-The example below shows how to run 4 workers simultaneously:
+The example below shows how to run four workers simultaneously:
 
 ::
 
@@ -187,20 +189,23 @@ The example below shows how to run 4 workers simultaneously:
     cephfs-data-scan scan_inodes --worker_n 3 --worker_m 4
 
 It is **important** to ensure that all workers have completed the
-scan_extents phase before any workers enter the scan_inodes phase.
+``scan_extents`` phase before any worker enters the ``scan_inodes`` phase.
 
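+The following is only an illustrative sketch of one way to sequence the two
+phases (it assumes a Bourne-style shell, four workers, and automatic data pool
+detection); any mechanism that ensures every ``scan_extents`` worker has
+exited before ``scan_inodes`` begins is acceptable:
+
+::
+
+    for n in 0 1 2 3 ; do
+        cephfs-data-scan scan_extents --worker_n $n --worker_m 4 &
+    done
+    # wait for every scan_extents worker to finish before moving on
+    wait
+    for n in 0 1 2 3 ; do
+        cephfs-data-scan scan_inodes --worker_n $n --worker_m 4 &
+    done
+    wait
+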
-After completing the metadata recovery, you may want to run cleanup
-operation to delete ancillary data generated during recovery.
+After completing the metadata recovery process, you may want to run a cleanup
+operation to delete ancillary data generated during recovery. Use a command of
+the following form to run a cleanup operation:
 
-::
+.. prompt:: bash #
+
+   cephfs-data-scan cleanup [<data pool>]
 
-    cephfs-data-scan cleanup [<data pool>]
+.. note::
 
-Note, the data pool parameters for 'scan_extents', 'scan_inodes' and
-'cleanup' commands are optional, and usually the tool will be able to
-detect the pools automatically. Still you may override this. The
-'scan_extents' command needs all data pools to be specified, while
-'scan_inodes' and 'cleanup' commands need only the main data pool.
+   The data pool parameters for the ``scan_extents``, ``scan_inodes``, and
+   ``cleanup`` commands are optional, and usually the tool will be able to
+   detect the pools automatically. You may still specify the pools explicitly.
+   If you do, the ``scan_extents`` command requires that all data pools be
+   specified, while the ``scan_inodes`` and ``cleanup`` commands require only
+   the main data pool.
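+
+   For example, if you choose to name the pools explicitly (the pool names
+   below are hypothetical), the invocations might look like this:
+
+   ::
+
+       cephfs-data-scan scan_extents cephfs_data cephfs_data_extra
+       cephfs-data-scan scan_inodes cephfs_data
+       cephfs-data-scan cleanup cephfs_data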
 
 
 Using an alternate metadata pool for recovery