@@ -388,28 +388,74 @@ following command:

 The output of ``ceph df`` resembles the following::

+   --- RAW STORAGE ---
    CLASS SIZE AVAIL USED RAW USED %RAW USED
-   ssd 202 GiB 200 GiB 2.0 GiB 2.0 GiB 1.00
-   TOTAL 202 GiB 200 GiB 2.0 GiB 2.0 GiB 1.00
-
+   hdd 5.4 PiB 1.2 PiB 4.3 PiB 4.3 PiB 78.58
+   ssd 22 TiB 19 TiB 2.7 TiB 2.7 TiB 12.36
+   TOTAL 5.5 PiB 1.2 PiB 4.3 PiB 4.3 PiB 78.32
+
    --- POOLS ---
-   POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
-   device_health_metrics 1 1 242 KiB 15 KiB 227 KiB 4 251 KiB 24 KiB 227 KiB 0 297 GiB N/A N/A 4 0 B 0 B
-   cephfs.a.meta 2 32 6.8 KiB 6.8 KiB 0 B 22 96 KiB 96 KiB 0 B 0 297 GiB N/A N/A 22 0 B 0 B
-   cephfs.a.data 3 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 99 GiB N/A N/A 0 0 B 0 B
-   test 4 32 22 MiB 22 MiB 50 KiB 248 19 MiB 19 MiB 50 KiB 0 297 GiB N/A N/A 248 0 B 0 B
+   POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
+   .mgr 11 1 558 MiB 141 1.6 GiB 0 5.8 TiB
+   cephfs_meta 13 1024 166 GiB 14.59M 499 GiB 2.74 5.8 TiB
+   cephfs_data 14 1024 0 B 1.17G 0 B 0 5.8 TiB
+   cephfsECvol 19 2048 2.8 PiB 1.81G 3.5 PiB 83.79 561 TiB
+   .nfs 20 32 9.7 KiB 61 118 KiB 0 5.8 TiB
+   testbench 71 32 12 GiB 3.14k 37 GiB 0 234 TiB
+   default.rgw.buckets.data 76 2048 482 TiB 132.09M 643 TiB 47.85 526 TiB
+   .rgw.root 97 1 1.4 KiB 4 48 KiB 0 5.8 TiB
+   default.rgw.log 98 256 3.6 KiB 209 408 KiB 0 5.8 TiB
+   default.rgw.control 99 1 0 B 8 0 B 0 5.8 TiB
+   default.rgw.meta 100 128 3.8 KiB 20 194 KiB 0 5.8 TiB
+   default.rgw.buckets.index 101 256 4.2 MiB 33 13 MiB 0 5.8 TiB
+   default.rgw.buckets.non-ec 102 128 5.6 MiB 13 17 MiB 0 5.8 TiB
+   kubedata 104 256 63 GiB 17.65k 188 GiB 0.03 234 TiB
+   kubemeta 105 256 241 MiB 166 724 MiB 0 5.8 TiB

- - **CLASS:** For example, "ssd" or "hdd".
+ - **CLASS:** Statistics for each CRUSH device class present, for example, ``ssd`` and ``hdd``.
  - **SIZE:** The amount of storage capacity managed by the cluster.
  - **AVAIL:** The amount of free space available in the cluster.
  - **USED:** The amount of raw storage consumed by user data (excluding
    BlueStore's database).
  - **RAW USED:** The amount of raw storage consumed by user data, internal
    overhead, and reserved capacity.
  - **%RAW USED:** The percentage of raw storage used. Watch this number in
-   conjunction with ``full ratio`` and ``near full ratio`` to be forewarned when
+   conjunction with ``backfillfull ratio`` and ``near full ratio`` to be forewarned when
    your cluster approaches the fullness thresholds. See `Storage Capacity`_.

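The same figures are available in machine-readable form via ``ceph df --format json``, which is usually easier for scripts to consume than the table above. The sketch below reads the raw-storage statistics this way and warns when utilization approaches the fullness thresholds. The JSON field names (``stats``, ``stats_by_class``, ``total_used_raw_ratio``) and the 0.85 warning threshold are assumptions based on one recent release; verify them against ``ceph df --format json-pretty`` on your own cluster.

.. code-block:: python

   #!/usr/bin/env python3
   # Minimal sketch: read ``ceph df`` as JSON and report raw utilization
   # per device class.  Field names are assumptions; check them against
   # ``ceph df --format json-pretty`` for your release.
   import json
   import subprocess

   NEARFULL_WARNING = 0.85  # assumed local alert threshold, not read from the cluster

   out = subprocess.run(
       ["ceph", "df", "--format", "json"],
       capture_output=True, check=True, text=True,
   ).stdout
   report = json.loads(out)

   total_ratio = report["stats"]["total_used_raw_ratio"]
   print(f"cluster raw utilization: {total_ratio:.2%}")

   for device_class, stats in report.get("stats_by_class", {}).items():
       print(f"  {device_class}: {stats['total_used_raw_ratio']:.2%} of "
             f"{stats['total_bytes'] / 2**40:.1f} TiB")

   if total_ratio >= NEARFULL_WARNING:
       print("WARNING: cluster is approaching its fullness thresholds")
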
+ Additional information can be displayed by invoking the following command:
+
+ .. prompt:: bash $
+
+    ceph df detail
+
+ The output then resembles the following example::
+
+   --- RAW STORAGE ---
+   CLASS SIZE AVAIL USED RAW USED %RAW USED
+   hdd 5.4 PiB 1.2 PiB 4.3 PiB 4.3 PiB 78.58
+   ssd 22 TiB 19 TiB 2.7 TiB 2.7 TiB 12.36
+   TOTAL 5.5 PiB 1.2 PiB 4.3 PiB 4.3 PiB 78.32
+
+   --- POOLS ---
+   POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
+   .mgr 11 1 558 MiB 558 MiB 0 B 141 1.6 GiB 1.6 GiB 0 B 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   cephfs_meta 13 1024 166 GiB 206 MiB 166 GiB 14.59M 499 GiB 618 MiB 498 GiB 2.74 5.8 TiB N/A N/A N/A 0 B 0 B
+   cephfs_data 14 1024 0 B 0 B 0 B 1.17G 0 B 0 B 0 B 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   cephfsECvol 19 2048 2.8 PiB 2.8 PiB 17 KiB 1.81G 3.5 PiB 3.5 PiB 21 KiB 83.79 561 TiB N/A N/A N/A 0 B 0 B
+   .nfs 20 32 9.7 KiB 2.2 KiB 7.5 KiB 61 118 KiB 96 KiB 22 KiB 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   testbench 71 32 12 GiB 12 GiB 2.3 KiB 3.14k 37 GiB 37 GiB 6.9 KiB 0 234 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.buckets.data 76 2048 482 TiB 482 TiB 0 B 132.09M 643 TiB 643 TiB 0 B 47.85 526 TiB N/A N/A N/A 312 MiB 623 MiB
+   .rgw.root 97 1 1.4 KiB 1.4 KiB 0 B 4 48 KiB 48 KiB 0 B 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.log 98 256 3.6 KiB 3.6 KiB 0 B 209 408 KiB 408 KiB 0 B 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.control 99 1 0 B 0 B 0 B 8 0 B 0 B 0 B 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.meta 100 128 3.8 KiB 3.2 KiB 671 B 20 194 KiB 192 KiB 2.0 KiB 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.buckets.index 101 256 4.2 MiB 0 B 4.2 MiB 33 13 MiB 0 B 13 MiB 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   default.rgw.buckets.non-ec 102 128 5.6 MiB 0 B 5.6 MiB 13 17 MiB 0 B 17 MiB 0 5.8 TiB N/A N/A N/A 0 B 0 B
+   kubedata 104 256 63 GiB 63 GiB 0 B 17.65k 188 GiB 188 GiB 0 B 0.03 234 TiB N/A 20 TiB N/A 0 B 0 B
+   kubemeta 105 256 241 MiB 241 MiB 278 KiB 166 723 MiB 722 MiB 833 KiB 0 5.8 TiB N/A N/A N/A 0 B 0 B
+
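As with the summary output, the detailed per-pool statistics can be consumed programmatically via ``ceph df detail --format json``. The sketch below ranks pools by utilization and reports any byte quota; it assumes the JSON layout of one recent release (a top-level ``pools`` array whose entries carry a ``stats`` object with ``percent_used`` and ``quota_bytes`` fields), so confirm the field names with ``ceph df detail --format json-pretty`` before relying on them.

.. code-block:: python

   #!/usr/bin/env python3
   # Minimal sketch: rank pools by utilization using ``ceph df detail``.
   # The JSON field names below are assumptions; confirm them with
   # ``ceph df detail --format json-pretty`` for your release.
   import json
   import subprocess

   out = subprocess.run(
       ["ceph", "df", "detail", "--format", "json"],
       capture_output=True, check=True, text=True,
   ).stdout
   pools = json.loads(out)["pools"]

   for pool in sorted(pools, key=lambda p: p["stats"]["percent_used"], reverse=True):
       stats = pool["stats"]
       line = f"{pool['name']:<30} {stats['percent_used']:>8.2%} used"
       quota_bytes = stats.get("quota_bytes", 0)
       if quota_bytes:
           line += f", byte quota {quota_bytes / 2**40:.1f} TiB"
       print(line)
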

 **POOLS:**

@@ -815,4 +861,4 @@ pool if needed with a command of the following form:

    ceph osd pool clear-availability-status <pool-name>

- Note: Clearing a score is not allowed if the feature itself is disabled.
+ Note: Clearing a score is not allowed if the feature itself is disabled.
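For example, to clear the availability score of the ``testbench`` pool shown in the ``ceph df`` output above, one would run ``ceph osd pool clear-availability-status testbench`` (this succeeds only if the availability-scoring feature is enabled).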