Commit ac5f785

Merge pull request ceph#61626 from zdover23/wip-doc-2025-02-03-rados-ops-pgs
doc/rados: improve pg_num/pgp_num info

Reviewed-by: Anthony D'Atri <[email protected]>
2 parents 842fb60 + c43e733 commit ac5f785

File tree

1 file changed: +17 −11 lines changed
doc/rados/operations/placement-groups.rst

Lines changed: 17 additions & 11 deletions
@@ -659,22 +659,28 @@ command of the following form:
 
    ceph osd pool set {pool-name} pg_num {pg_num}
 
-If you increase the number of PGs, your cluster will not rebalance until you
-increase the number of PGs for placement (``pgp_num``). The ``pgp_num``
-parameter specifies the number of PGs that are to be considered for placement
-by the CRUSH algorithm. Increasing ``pg_num`` splits the PGs in your cluster,
-but data will not be migrated to the newer PGs until ``pgp_num`` is increased.
-The ``pgp_num`` parameter should be equal to the ``pg_num`` parameter. To
-increase the number of PGs for placement, run a command of the following form:
+Since the Nautilus release, Ceph automatically steps ``pgp_num`` for a pool
+whenever ``pg_num`` is changed, either by the PG autoscaler or manually. Admins
+generally do not need to touch ``pgp_num`` directly, but can monitor progress
+with ``watch ceph osd pool ls detail``. When ``pg_num`` is changed, the value
+of ``pgp_num`` is stepped slowly so that the cost of splitting or merging PGs
+is amortized over time to minimize performance impact.
+
+Increasing ``pg_num`` splits the PGs in your cluster, but data will not be
+migrated to the newer PGs until ``pgp_num`` is increased.
+
+It is possible to manually set the ``pgp_num`` parameter. The ``pgp_num``
+parameter should be equal to the ``pg_num`` parameter. To increase the number
+of PGs for placement, run a command of the following form:
 
 .. prompt:: bash #
 
    ceph osd pool set {pool-name} pgp_num {pgp_num}
 
-If you decrease the number of PGs, then ``pgp_num`` is adjusted automatically.
-In releases of Ceph that are Nautilus and later (inclusive), when the
-``pg_autoscaler`` is not used, ``pgp_num`` is automatically stepped to match
-``pg_num``. This process manifests as periods of remapping of PGs and of
+If you decrease or increase the number of PGs, then ``pgp_num`` is adjusted
+automatically. In releases of Ceph that are Nautilus and later (inclusive),
+when the ``pg_autoscaler`` is not used, ``pgp_num`` is automatically stepped to
+match ``pg_num``. This process manifests as periods of remapping of PGs and of
 backfill, and is expected behavior and normal.
 
 .. _rados_ops_pgs_get_pg_num:
