
Commit ce128ed

Merge pull request ceph#62017 from anthonyeleven/pg-target
src/common/options: Clarify mon_target_pg_per_osd in mgr.yaml.in
2 parents 431f39f + fecbb3a commit ce128ed

1 file changed: +6 -4 lines

src/common/options/mgr.yaml.in

Lines changed: 6 additions & 4 deletions
@@ -312,11 +312,13 @@ options:
 - name: mon_target_pg_per_osd
   type: uint
   level: advanced
-  desc: Automated PG management creates this many PGs per OSD
-  long_desc: When creating pools, the automated PG management logic will attempt to
-    reach this target. In some circumstances, it may exceed this target, up to the
+  desc: Target number of PG replicas per OSD
+  long_desc: The placement group (PG) autoscaler will arrange for approximately this number of PG
+    replicas on each OSD as shown by the PGS column of a ``ceph osd df``
+    report. In some circumstances it may exceed this target, up to the
     ``mon_max_pg_per_osd`` limit. Conversely, a lower number of PGs per OSD may be
-    created if the cluster is not yet fully utilised
+    set if the cluster is not yet fully utilized or when the sum of power-of-two
+    per-pool ``pg_num`` values does not permit a perfect fit.
   default: 100
   min: 1
 # min pgs per osd for reweight-by-pg command
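The new `long_desc` notes that power-of-two per-pool `pg_num` values may not permit a perfect fit. A minimal sketch of the underlying arithmetic: the PGS figure per OSD is roughly the sum over pools of `pg_num` times the replica size, divided by the OSD count. The pool values and OSD count below are hypothetical examples, not taken from the commit.

```python
def pg_replicas_per_osd(pools, num_osds):
    """Approximate average PG replicas per OSD, as shown in the PGS
    column of ``ceph osd df``: sum of pg_num * replica size across
    pools, divided by the number of OSDs."""
    total_replicas = sum(pg_num * size for pg_num, size in pools)
    return total_replicas / num_osds

# Hypothetical pools as (pg_num, replication size) pairs.
# pg_num values are powers of two, so landing exactly on the
# mon_target_pg_per_osd default of 100 is often impossible.
pools = [(256, 3), (128, 2)]   # 768 + 256 = 1024 PG replicas total
print(pg_replicas_per_osd(pools, num_osds=12))
```

With 12 OSDs this yields about 85.3 PG replicas per OSD; doubling either pool's `pg_num` would overshoot the target, which is why the autoscaler can only approximate it.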

0 commit comments