@@ -106,22 +106,27 @@ to be considered ``stuck`` (default: 300).
 PGs might be stuck in any of the following states:

 **Inactive**
+
     PGs are unable to process reads or writes because they are waiting for an
     OSD that has the most up-to-date data to return to an ``up`` state.

+
 **Unclean**
+
     PGs contain objects that have not been replicated the desired number of
     times. These PGs have not yet completed the process of recovering.

+
 **Stale**
+
     PGs are in an unknown state, because the OSDs that host them have not
     reported to the monitor cluster for a certain period of time (specified by
     the ``mon_osd_report_timeout`` configuration setting).


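+For example, to list the PGs that are stuck in each of these states, you can
+use the ``ceph pg dump_stuck`` command covered earlier in this section:
+
+.. prompt:: bash $
+
+   ceph pg dump_stuck inactive
+   ceph pg dump_stuck unclean
+   ceph pg dump_stuck stale
+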
-To delete a ``lost`` RADOS object or revert an object to its prior state
-(either by reverting it to its previous version or by deleting it because it
-was just created and has no previous version), run the following command:
+To delete a ``lost`` object or revert an object to its prior state, either by
+reverting it to its previous version or by deleting it because it was just
+created and has no previous version, run the following command:

 .. prompt:: bash $

@@ -168,10 +173,8 @@ To dump the OSD map, run the following command:
    ceph osd dump [--format {format}]

 The ``--format`` option accepts the following arguments: ``plain`` (default),
-``json``, ``json-pretty``, ``xml``, and ``xml-pretty``. As noted above, JSON
-format is the recommended format for consumption by tools, scripting, and other
-forms of automation.
-
+``json``, ``json-pretty``, ``xml``, and ``xml-pretty``. As noted above, JSON is
+the recommended format for tools, scripting, and other forms of automation.

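+For example, to dump the OSD map as pretty-printed JSON (convenient for piping
+into a tool such as ``jq``):
+
+.. prompt:: bash $
+
+   ceph osd dump --format json-pretty
+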
 To dump the OSD map as a tree that lists one OSD per line and displays
 information about the weights and states of the OSDs, run the following
@@ -230,7 +233,7 @@ To mark an OSD as ``lost``, run the following command:
 .. warning::
    This could result in permanent data loss. Use with caution!

-To create an OSD in the CRUSH map, run the following command:
+To create a new OSD, run the following command:

 .. prompt:: bash $

@@ -287,47 +290,51 @@ following command:

    ceph osd in {osd-num}

-By using the ``pause`` and ``unpause`` flags in the OSD map, you can pause or
-unpause I/O requests. If the flags are set, then no I/O requests will be sent
-to any OSD. If the flags are cleared, then pending I/O requests will be resent.
-To set or clear these flags, run one of the following commands:
+By using the "pause flags" in the OSD map, you can pause or unpause I/O
+requests. If the flags are set, then no I/O requests will be sent to any OSD.
+When the flags are cleared, pending I/O requests will be resent. To set or
+clear pause flags, run one of the following commands:

 .. prompt:: bash $

    ceph osd pause
    ceph osd unpause

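+One way to confirm whether the pause flags are currently set is to inspect the
+``flags`` line of the OSD map; when I/O is paused, the ``pauserd`` and
+``pausewr`` flags appear there:
+
+.. prompt:: bash $
+
+   ceph osd dump | grep flags
+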
-You can assign an override or ``reweight`` weight value to a specific OSD
-if the normal CRUSH distribution seems to be suboptimal. The weight of an
-OSD helps determine the extent of its I/O requests and data storage: two
-OSDs with the same weight will receive approximately the same number of
-I/O requests and store approximately the same amount of data. The ``ceph
-osd reweight`` command assigns an override weight to an OSD. The weight
-value is in the range 0 to 1, and the command forces CRUSH to relocate a
-certain amount (1 - ``weight``) of the data that would otherwise be on
-this OSD. The command does not change the weights of the buckets above
-the OSD in the CRUSH map. Using the command is merely a corrective
-measure: for example, if one of your OSDs is at 90% and the others are at
-50%, you could reduce the outlier weight to correct this imbalance. To
-assign an override weight to a specific OSD, run the following command:
+You can assign an override or ``reweight`` weight value to a specific OSD if
+the normal CRUSH distribution seems to be suboptimal. The weight of an OSD
+helps determine the extent of its I/O requests and data storage: two OSDs with
+the same weight will receive approximately the same number of I/O requests and
+store approximately the same amount of data. The ``ceph osd reweight`` command
+assigns an override weight to an OSD. The weight value is in the range 0 to 1,
+and the command forces CRUSH to relocate a certain amount (1 - ``weight``) of
+the data that would otherwise be on this OSD. The command does not change the
+weights of the buckets above the OSD in the CRUSH map. Using the command is
+merely a corrective measure: for example, if one of your OSDs is at 90% and the
+others are at 50%, you could reduce the outlier weight to correct this
+imbalance. To assign an override weight to a specific OSD, run the following
+command:

 .. prompt:: bash $

    ceph osd reweight {osd-num} {weight}

+.. note:: Any assigned override reweight value will conflict with the balancer.
+   This means that if the balancer is in use, all override reweight values
+   should be ``1.0000`` in order to avoid suboptimal cluster behavior.
+
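+For example, if a hypothetical ``osd.53`` is markedly more full than its
+peers, the following command forces CRUSH to relocate roughly 15% (1 - 0.85)
+of the data that would otherwise be placed on it:
+
+.. prompt:: bash $
+
+   ceph osd reweight 53 0.85
+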
 A cluster's OSDs can be reweighted in order to maintain balance if some OSDs
 are being disproportionately utilized. Note that override or ``reweight``
-weights have relative values that default to 1.00000. Their values are not
-absolute, and these weights must be distinguished from CRUSH weights (which
-reflect the absolute capacity of a bucket, as measured in TiB). To reweight
-OSDs by utilization, run the following command:
+weights have values relative to one another that default to 1.00000; their
+values are not absolute, and these weights must be distinguished from CRUSH
+weights (which reflect the absolute capacity of a bucket, as measured in TiB).
+To reweight OSDs by utilization, run the following command:

 .. prompt:: bash $

    ceph osd reweight-by-utilization [threshold [max_change [max_osds]]] [--no-increasing]

-By default, this command adjusts the override weight of OSDs that have ±20%
-of the average utilization, but you can specify a different percentage in the
+By default, this command adjusts the override weight of OSDs that deviate from
+the average utilization by ±20% or more, but you can specify a different
+percentage in the
 ``threshold`` argument.

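+As an illustrative sketch, the following invocation targets OSDs whose
+utilization deviates from the average by more than 10%, limits each weight
+change to 0.05, and adjusts at most ten OSDs per run:
+
+.. prompt:: bash $
+
+   ceph osd reweight-by-utilization 110 0.05 10
+
+To preview the changes that such a run would make without applying them, use
+``ceph osd test-reweight-by-utilization`` with the same arguments.
+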
 To limit the increment by which any OSD's reweight is to be changed, use the
@@ -351,17 +358,9 @@ can be useful in certain circumstances: for example, when you are hastily
 balancing in order to remedy ``full`` or ``nearfull`` OSDs, or when there are
 OSDs being evacuated or slowly brought into service.

-Operators of deployments that utilize Nautilus or newer (or later revisions of
-Luminous and Mimic) and that have no pre-Luminous clients might likely instead
-want to enable the ``balancer`` module for ``ceph-mgr``.
-
-.. note:: The ``balancer`` module does the work for you and achieves a more
-   uniform result, shuffling less data along the way. When enabling the
-   ``balancer`` module, you will want to converge any changed override weights
-   back to 1.00000 so that the balancer can do an optimal job. If your cluster
-   is very full, reverting these override weights before enabling the balancer
-   may cause some OSDs to become full. This means that a phased approach may
-   be needed.
+Operators of deployments that run Nautilus or a newer release (or later
+revisions of Luminous and Mimic) and that have no pre-Luminous clients might
+instead want to enable the ``balancer`` module for ``ceph-mgr``.
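+
+As a minimal sketch, enabling the balancer might look like the following (the
+``upmap`` mode shown here assumes that all clients are Luminous or newer):
+
+.. prompt:: bash $
+
+   ceph balancer mode upmap
+   ceph balancer on
+   ceph balancer status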

 Add/remove an IP address or CIDR range to/from the blocklist.
 When adding to the blocklist,