Commit 7280a24

Merge pull request ceph#55899 from zdover23/wip-doc-2024-03-02-rados-radosgw-pgcalc
doc/rados: remove PGcalc from docs

Reviewed-by: Ronen Friedman <[email protected]>
2 parents 3e302ab + ccb851d commit 7280a24

3 files changed: 15 additions, 25 deletions

doc/rados/operations/placement-groups.rst

Lines changed: 0 additions & 4 deletions
@@ -641,9 +641,6 @@ pools, each with 512 PGs on 10 OSDs, the OSDs will have to handle ~50,000 PGs
 each. This cluster will require significantly more resources and significantly
 more time for peering.
 
-For determining the optimal number of PGs per OSD, we recommend the `PGCalc`_
-tool.
-
 
 .. _setting the number of placement groups:
 
@@ -935,4 +932,3 @@ about it entirely (if it is too new to have a previous version). To mark the
 
 .. _Create a Pool: ../pools#createpool
 .. _Mapping PGs to OSDs: ../../../architecture#mapping-pgs-to-osds
-.. _pgcalc: https://old.ceph.com/pgcalc/
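
With the PGCalc reference removed here, the section relies on the PG-per-OSD target and the linked procedure for setting the number of placement groups. A minimal sketch of that manual sizing, using the standard ``ceph osd pool set`` command; the arithmetic and the pool name are illustrative assumptions, not values taken from this commit:

    # Back-of-the-envelope sizing (illustrative numbers, not from this commit):
    #   total PGs across all pools ~= (OSD count * target PGs per OSD) / replica size
    #   e.g. 10 OSDs * 100 PGs per OSD / 3 replicas ~= 333, rounded to a power of two -> 256
    # Apply the chosen value to one pool by hand ("mypool" is a hypothetical name):
    ceph osd pool set mypool pg_num 256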

doc/rados/operations/pools.rst

Lines changed: 8 additions & 11 deletions
@@ -18,15 +18,14 @@ Pools provide:
   <../erasure-code>`_, resilience is defined as the number of coding chunks
   (for example, ``m = 2`` in the default **erasure code profile**).
 
-- **Placement Groups**: You can set the number of placement groups (PGs) for
-  the pool. In a typical configuration, the target number of PGs is
-  approximately one hundred PGs per OSD. This provides reasonable balancing
-  without consuming excessive computing resources. When setting up multiple
-  pools, be careful to set an appropriate number of PGs for each pool and for
-  the cluster as a whole. Each PG belongs to a specific pool: when multiple
-  pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
-  in the desired PG-per-OSD target range. To calculate an appropriate number of
-  PGs for your pools, use the `pgcalc`_ tool.
+- **Placement Groups**: The :ref:`autoscaler <pg-autoscaler>` sets the number
+  of placement groups (PGs) for the pool. In a typical configuration, the
+  target number of PGs is approximately one-hundred and fifty PGs per OSD. This
+  provides reasonable balancing without consuming excessive computing
+  resources. When setting up multiple pools, set an appropriate number of PGs
+  for each pool and for the cluster as a whole. Each PG belongs to a specific
+  pool: when multiple pools use the same OSDs, make sure that the **sum** of PG
+  replicas per OSD is in the desired PG-per-OSD target range.
 
 - **CRUSH Rules**: When data is stored in a pool, the placement of the object
   and its replicas (or chunks, in the case of erasure-coded pools) in your
@@ -735,8 +734,6 @@ Managing pools that are flagged with ``--bulk``
 ===============================================
 See :ref:`managing_bulk_flagged_pools`.
 
-
-.. _pgcalc: https://old.ceph.com/pgcalc/
 .. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
 .. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
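
The rewritten bullet defers PG counts to the :ref:`autoscaler <pg-autoscaler>` instead of a manual calculator. A hedged illustration of that workflow using standard Ceph CLI commands; the pool name is hypothetical and nothing below is introduced by this commit:

    # Inspect the autoscaler's per-pool targets and any pending pg_num changes
    ceph osd pool autoscale-status

    # Ensure the autoscaler manages a given pool ("mypool" is a hypothetical name)
    ceph osd pool set mypool pg_autoscale_mode on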

doc/radosgw/pools.rst

Lines changed: 7 additions & 10 deletions
@@ -11,16 +11,13 @@ multiple zones.
 Tuning
 ======
 
-When ``radosgw`` first tries to operate on a zone pool that does not
-exist, it will create that pool with the default values from
-``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
-are sufficient for some pools, but others (especially those listed in
-``placement_pools`` for the bucket index and data) will require additional
-tuning. We recommend using the `Ceph Placement Group’s per Pool
-Calculator <https://old.ceph.com/pgcalc/>`__ to calculate a suitable number of
-placement groups for these pools. See
-`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
-for details on pool creation.
+When ``radosgw`` first tries to operate on a zone pool that does not exist, it
+will create that pool with the default values from ``osd pool default pg num``
+and ``osd pool default pgp num``. These defaults are sufficient for some pools,
+but others (especially those listed in ``placement_pools`` for the bucket index
+and data) will require additional tuning. See `Pools
+<http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__ for details
+on pool creation.
 
 .. _radosgw-pool-namespaces:
 
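Because ``radosgw`` falls back to ``osd pool default pg num`` and ``osd pool default pgp num`` when it auto-creates zone pools, those defaults can be tuned ahead of time through the monitor configuration database. A small sketch with illustrative values; 128 is an assumption for the example, not a recommendation from this commit:

    # Set the defaults used for pools that radosgw creates implicitly
    ceph config set global osd_pool_default_pg_num 128
    ceph config set global osd_pool_default_pgp_num 128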