
Commit 3c4cc94

Merge pull request ceph#57415 from zdover23/wip-doc-2024-05-11-cephfs-fs-volumes-1-of-x
doc/cephfs: edit fs-volumes.rst (1 of x)
Reviewed-by: Anthony D'Atri <[email protected]>
2 parents 69bd270 + 0acbb27 commit 3c4cc94

File tree

1 file changed (+44 -37 lines changed)


doc/cephfs/fs-volumes.rst

Lines changed: 44 additions & 37 deletions
@@ -20,11 +20,11 @@ abstractions:
   subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
   across a set of subvolumes

-Some possible use-cases for the export abstractions:
+Possible use-cases for the export abstractions:

 * FS subvolumes used as Manila shares or CSI volumes

-* FS subvolume groups used as Manila share groups
+* FS-subvolume groups used as Manila share groups

 Requirements
 ------------
@@ -46,9 +46,9 @@ Create a volume by running the following command:

    ceph fs volume create <vol_name> [placement]

-This creates a CephFS file system and its data and metadata pools. It can also
-deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for
-example Rook). See :doc:`/mgr/orchestrator`.
+This creates a CephFS file system and its data and metadata pools. This command
+can also deploy MDS daemons for the filesystem using a ceph-mgr orchestrator
+module (for example Rook). See :doc:`/mgr/orchestrator`.

 ``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
 optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
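For illustration only, the reworded paragraph corresponds to an invocation of the following kind (the volume name is a hypothetical placeholder, not taken from the patch):

   # "vol_a" is a hypothetical volume name used only for illustration
   ceph fs volume create vol_a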
@@ -64,13 +64,13 @@ To remove a volume, run the following command:

    ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

-This removes a file system and its data and metadata pools. It also tries to
-remove MDS daemons using the enabled ceph-mgr orchestrator module.
+This command removes a file system and its data and metadata pools. It also
+tries to remove MDS daemons using the enabled ceph-mgr orchestrator module.

-.. note:: After volume deletion, it is recommended to restart `ceph-mgr`
-   if a new file system is created on the same cluster and subvolume interface
-   is being used. Please see https://tracker.ceph.com/issues/49605#note-5
-   for more details.
+.. note:: After volume deletion, we recommend restarting `ceph-mgr` if a new
+   file system is created on the same cluster and the subvolume interface is
+   being used. See https://tracker.ceph.com/issues/49605#note-5 for more
+   details.

 List volumes by running the following command:
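A minimal sketch of the removal described above, together with the listing command that the following paragraph introduces (the volume name is a placeholder, and ``ceph fs volume ls`` is an assumption since the listing command itself falls outside this hunk):

   # hypothetical volume name; "ceph fs volume ls" assumed as the listing command
   ceph fs volume rm vol_a --yes-i-really-mean-it
   ceph fs volume ls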

@@ -86,25 +86,26 @@ Rename a volume by running the following command:

 Renaming a volume can be an expensive operation that requires the following:

-- Renaming the orchestrator-managed MDS service to match the <new_vol_name>.
-  This involves launching a MDS service with ``<new_vol_name>`` and bringing
-  down the MDS service with ``<vol_name>``.
-- Renaming the file system matching ``<vol_name>`` to ``<new_vol_name>``.
-- Changing the application tags on the data and metadata pools of the file system
-  to ``<new_vol_name>``.
+- Renaming the orchestrator-managed MDS service to match the
+  ``<new_vol_name>``. This involves launching a MDS service with
+  ``<new_vol_name>`` and bringing down the MDS service with ``<vol_name>``.
+- Renaming the file system from ``<vol_name>`` to ``<new_vol_name>``.
+- Changing the application tags on the data and metadata pools of the file
+  system to ``<new_vol_name>``.
 - Renaming the metadata and data pools of the file system.

 The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
-``<new_vol_name>``. Any ongoing operations of the clients using these IDs may
-be disrupted. Ensure that mirroring is disabled on the volume.
+``<new_vol_name>``. Any ongoing operations of the clients that are using these
+IDs may be disrupted. Ensure that mirroring is disabled on the volume.

 To fetch the information of a CephFS volume, run the following command:

 .. prompt:: bash #

    ceph fs volume info vol_name [--human_readable]

-The ``--human_readable`` flag shows used and available pool capacities in KB/MB/GB.
+The ``--human_readable`` flag shows used and available pool capacities in
+KB/MB/GB.

 The output format is JSON and contains fields as follows:
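As a hypothetical example of the information query shown above (the volume name is a placeholder):

   # "vol_a" is a placeholder volume name
   ceph fs volume info vol_a --human_readable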

@@ -159,7 +160,7 @@ Create a subvolume group by running the following command:

 The command succeeds even if the subvolume group already exists.

-When creating a subvolume group you can specify its data pool layout (see
+When you create a subvolume group, you can specify its data pool layout (see
 :doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and
 size in bytes. The size of the subvolume group is specified by setting
 a quota on it (see :doc:`/cephfs/quota`). By default, the subvolume group
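A sketch of a creation call for the paragraph above, assuming hypothetical names; the ``--mode`` and ``--size`` option spellings are assumptions and should be checked against the command's help output:

   # hypothetical names; option spellings assumed; size given in bytes (10 GiB)
   ceph fs subvolumegroup create vol_a group_a --mode 755 --size 10737418240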
@@ -173,11 +174,11 @@ Remove a subvolume group by running a command of the following form:
    ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

 The removal of a subvolume group fails if the subvolume group is not empty or
-is non-existent. The ``--force`` flag allows the non-existent "subvolume group remove
-command" to succeed.
+is non-existent. The ``--force`` flag allows the command to succeed when its
+argument is a non-existent subvolume group.

-
-Fetch the absolute path of a subvolume group by running a command of the following form:
+Fetch the absolute path of a subvolume group by running a command of the
+following form:

 .. prompt:: bash #
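For illustration, with hypothetical names; the path-fetching command itself lies outside this hunk, so ``ceph fs subvolumegroup getpath`` is an assumption:

   # hypothetical names; "getpath" subcommand assumed
   ceph fs subvolumegroup getpath vol_a group_a
   ceph fs subvolumegroup rm vol_a group_a --force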

@@ -192,17 +193,21 @@ List subvolume groups by running a command of the following form:
 .. note:: Subvolume group snapshot feature is no longer supported in mainline CephFS (existing group
    snapshots can still be listed and deleted)

-Fetch the metadata of a subvolume group by running a command of the following form:
+Fetch the metadata of a subvolume group by running a command of the following
+form:

 .. prompt:: bash #

    ceph fs subvolumegroup info <vol_name> <group_name>

 The output format is JSON and contains fields as follows:

-* ``atime``: access time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
-* ``mtime``: modification time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
-* ``ctime``: change time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
+* ``atime``: access time of the subvolume group path in the format ``YYYY-MM-DD
+  HH:MM:SS``
+* ``mtime``: modification time of the subvolume group path in the format
+  ``YYYY-MM-DD HH:MM:SS``
+* ``ctime``: change time of the subvolume group path in the format ``YYYY-MM-DD
+  HH:MM:SS``
 * ``uid``: uid of the subvolume group path
 * ``gid``: gid of the subvolume group path
 * ``mode``: mode of the subvolume group path
@@ -213,7 +218,8 @@ The output format is JSON and contains fields as follows:
 * ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
 * ``data_pool``: data pool to which the subvolume group belongs

-Check the presence of any subvolume group by running a command of the following form:
+Check for the presence of a given subvolume group by running a command of the
+following form:

 .. prompt:: bash #
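Hypothetical invocations of the metadata query and the presence check covered by the last two hunks (names are placeholders; the exact form of the ``exist`` command is not shown in this context and is assumed):

   # hypothetical names; "exist" taking only the volume name is assumed
   ceph fs subvolumegroup info vol_a group_a
   ceph fs subvolumegroup exist vol_a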

@@ -225,31 +231,32 @@ The ``exist`` command outputs:
 * "no subvolumegroup exists": if no subvolumegroup is present

 .. note:: This command checks for the presence of custom groups and not
-   presence of the default one. To validate the emptiness of the volume, a
-   subvolumegroup existence check alone is not sufficient. Subvolume existence
-   also needs to be checked as there might be subvolumes in the default group.
+   presence of the default one. A subvolumegroup-existence check alone is not
+   sufficient to validate the emptiness of the volume. Subvolume existence must
+   also be checked, as there might be subvolumes in the default group.

 Resize a subvolume group by running a command of the following form:

 .. prompt:: bash #

    ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]

-The command resizes the subvolume group quota, using the size specified by
+This command resizes the subvolume group quota, using the size specified by
 ``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
 shrinking below the current used size.

 The subvolume group may be resized to an infinite size by passing ``inf`` or
 ``infinite`` as the ``new_size``.

-Remove a snapshot of a subvolume group by running a command of the following form:
+Remove a snapshot of a subvolume group by running a command of the following
+form:

 .. prompt:: bash #

    ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

-Supplying the ``--force`` flag allows the command to succeed when it would otherwise
-fail due to the nonexistence of the snapshot.
+Supplying the ``--force`` flag allows the command to succeed when it would
+otherwise fail due to the nonexistence of the snapshot.

 List snapshots of a subvolume group by running a command of the following form:
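A brief sketch of the resize and snapshot-removal commands in this hunk, with hypothetical names and a quota of 20 GiB expressed in bytes:

   # hypothetical names; new_size of 21474836480 bytes = 20 GiB
   ceph fs subvolumegroup resize vol_a group_a 21474836480 --no_shrink
   ceph fs subvolumegroup snapshot rm vol_a group_a snap_a --force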
