Commit cfd09e2

Merge pull request ceph#63648 from zdover23/wip-doc-2025-06-03-backport-63644-to-tentacle
tentacle: doc/rados/operations: Additional improvements to placement-groups.rst
Reviewed-by: Anthony D'Atri <[email protected]>
2 parents 39ae881 + 5f8aaad commit cfd09e2

1 file changed: +2 -2 lines changed

doc/rados/operations/placement-groups.rst

Lines changed: 2 additions & 2 deletions
@@ -391,7 +391,7 @@ The autoscaler attempts to satisfy the following conditions:
 
 - The number of PG replicas per OSD should be proportional to the amount of data in the
   pool.
-- There should by default 50-100 PGs per pool, taking into account the replication
+- There should by default be 50-100 PGs per pool, taking into account the replication
   overhead or erasure-coding fan-out of each PG's replicas across OSDs.
 
 Use of Placement Groups
@@ -610,7 +610,7 @@ Memory, CPU and network usage
 Every PG in the cluster imposes memory, network, and CPU demands upon OSDs and
 Monitors. These needs must be met at all times and are increased during recovery.
 Indeed, one of the main reasons PGs were developed was to decrease this overhead
-by aggregating RADOS objects into a sets of a manageable size.
+by aggregating RADOS objects into sets of a manageable size.
 
 For this reason, limiting the number of PGs saves significant resources.
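The edited guidance ties per-OSD load to each PG's replication or erasure-coding fan-out. A minimal sketch of that arithmetic, assuming hypothetical pools and OSD counts (none of the figures below come from this commit or from Ceph defaults), estimates the average number of PG replicas per OSD as the sum of pg_num × fan-out across pools, divided by the OSD count:

# Rough sketch of the PG-replica arithmetic behind the edited passage.
# All pool definitions below are hypothetical examples, not Ceph defaults.

def pg_replicas_per_osd(pools, num_osds):
    """Estimate the average number of PG replicas landing on each OSD.

    Each PG is stored 'size' times for a replicated pool, or k+m times for
    an erasure-coded pool, so the cluster-wide replica count is the sum of
    pg_num * fan_out over all pools, spread across the OSDs.
    """
    total_replicas = sum(pg_num * fan_out for pg_num, fan_out in pools)
    return total_replicas / num_osds

# Example: two replicated pools (size=3) and one erasure-coded 4+2 pool.
pools = [
    (128, 3),      # data pool: 128 PGs, 3x replication
    (64, 3),       # metadata pool: 64 PGs, 3x replication
    (256, 4 + 2),  # EC 4+2 pool: 256 PGs, fan-out of 6
]

print(pg_replicas_per_osd(pools, num_osds=20))  # ~105.6 PG replicas per OSD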
