
Commit 4090019

Merge pull request ceph#54620 from rishabh-d-dave/mgr-vol-clone-stats

mgr/vol: show progress and stats for the subvolume snapshot clones

Reviewed-by: Venky Shankar <[email protected]>

2 parents aed37cc + a6b95a5

14 files changed: +1301 −85 lines changed

PendingReleaseNotes

Lines changed: 9 additions & 0 deletions

@@ -265,6 +265,15 @@ CephFS: Disallow delegating preallocated inode ranges to clients. Config
   exposed via the new `--snap-id` option for `rbd clone` command.
 * RBD: The output of `rbd snap ls --all` command now includes the original
   type for trashed snapshots.
+* CephFS: The "ceph fs clone status" command now prints statistics about
+  clone progress: how much data has been cloned (as a percentage as well
+  as in bytes) and how many files have been cloned.
+* CephFS: The "ceph status" command now prints a progress bar while cloning
+  is ongoing. If there are more clone jobs than cloner threads, one more
+  progress bar is printed showing the total progress made by ongoing as
+  well as pending clones. Both progress bars are accompanied by messages
+  showing the number of clone jobs in the respective categories and the
+  progress made by each of them.
 
 >=18.0.0
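
These statistics are machine-readable, so they can be polled from a script.
A minimal sketch, not part of this commit: the volume and clone names are
placeholders, and it assumes the standard `--format=json` flag accepted by
ceph CLI commands:

    #!/usr/bin/env python3
    # Poll "ceph fs clone status" and print the new progress_report
    # fields. "cephfs" and "clone1" are placeholder names.
    import json
    import subprocess
    import time

    def clone_status(volume, clone):
        out = subprocess.check_output(
            ['ceph', 'fs', 'clone', 'status', volume, clone,
             '--format=json'])
        return json.loads(out)['status']

    status = clone_status('cephfs', 'clone1')
    while status['state'] == 'in-progress':
        # progress_report is present only while the clone is in progress
        report = status.get('progress_report', {})
        print(report.get('percentage cloned'),
              report.get('amount cloned'),
              report.get('files cloned'))
        time.sleep(5)
        status = clone_status('cephfs', 'clone1')
    print('final state:', status['state'])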

doc/cephfs/fs-volumes.rst

Lines changed: 31 additions & 7 deletions

@@ -758,16 +758,40 @@ Here is an example of an ``in-progress`` clone:
 ::
 
     {
-      "status": {
-        "state": "in-progress",
-        "source": {
-          "volume": "cephfs",
-          "subvolume": "subvol1",
-          "snapshot": "snap1"
-        }
+      "status": {
+        "state": "in-progress",
+        "source": {
+          "volume": "cephfs",
+          "subvolume": "subvol1",
+          "snapshot": "snap1"
+        },
+        "progress_report": {
+          "percentage cloned": "12.24%",
+          "amount cloned": "376M/3.0G",
+          "files cloned": "4/6"
+        }
       }
     }
 
+A progress report is also printed in the output while the clone is
+``in-progress``. Here the progress is reported only for that specific
+clone. For the collective progress made by all ongoing clones, a progress
+bar is printed at the bottom of the output of the ``ceph status``
+command::
+
+  progress:
+    3 ongoing clones - average progress is 47.569% (10s)
+      [=============...............] (remaining: 11s)
+
+If there are more clone jobs than cloner threads, two progress bars are
+printed, one for ongoing clones (same as above) and another for all
+(ongoing+pending) clones::
+
+  progress:
+    4 ongoing clones - average progress is 27.669% (15s)
+      [=======.....................] (remaining: 41s)
+    Total 5 clones - average progress is 41.667% (3s)
+      [===========.................] (remaining: 4s)
+
 .. note:: The ``failure`` section will be shown only if the clone's state is ``failed`` or ``cancelled``
 
 Here is an example of a ``failed`` clone:
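
The "average progress" figure in the bars above is a mean across clone
jobs. As a rough illustration only (this is not the mgr/volumes
implementation), such a value could be derived from per-clone byte counts:

    # Rough illustration, not the mgr/volumes code: average the
    # per-clone fractions of bytes copied, as in the bars above.
    def average_progress(clones):
        # clones: list of (bytes_cloned, bytes_total) for ongoing jobs
        fractions = [copied / total for copied, total in clones if total]
        return 100 * sum(fractions) / len(fractions) if fractions else 0.0

    # three hypothetical ongoing clones
    print('%.3f%%' % average_progress([(376, 3072), (700, 1000),
                                       (512, 1024)]))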
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+tasks:
+- cephfs_test_runner:
+    fail_on_skip: false
+    modules:
+      - tasks.cephfs.test_volumes.TestCloneProgressReporter
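
Fragments like this are consumed by teuthology's cephfs_test_runner task.
For local development, the same test class can usually also be run against
a vstart cluster via vstart_runner (a common workflow, not part of this
commit; assumes a source checkout and a running vstart cluster):

    # run from the build directory of a source checkout
    python3 ../qa/tasks/vstart_runner.py \
        tasks.cephfs.test_volumes.TestCloneProgressReporter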

qa/tasks/cephfs/mount.py

Lines changed: 4 additions & 0 deletions

@@ -775,6 +775,10 @@ def run_shell(self, args, **kwargs):
 
         return self.client_remote.run(args=args, **kwargs)
 
+    def get_shell_stdout(self, args, timeout=300, **kwargs):
+        return self.run_shell(args=args, timeout=timeout, **kwargs).stdout.\
+            getvalue().strip()
+
     def run_shell_payload(self, payload, wait=True, timeout=900, **kwargs):
         kwargs.setdefault('cwd', self.mountpoint)
         kwargs.setdefault('omit_sudo', False)
