
Commit f3a47e1

bluikko authored and zdover23 committed
doc/radosgw: Cosmetic improvements in dynamicresharding.rst
Make reference to config section a hyperlink. Capitalization consistency: use title case in section titles, fix two invalid capitalizations in text. Promptify CLI example commands. A JSON key-value pair is a "property" and not an "object". Use an ordered list instead of inline code with hardcoded list numbers. Use the American "canceled" (majority of occurrences in doc/) instead of "cancelled". Use admonitions instead of spelling out "Note:". Clarify language on sharding cleanup for multisite. Format JSON keys as inline code. Indent example JSON output from radosgw-admin correctly (same as real output) with 4 spaces. Use colon instead of full stop at the end of text that describes the following example command. Move admonition to after such example command.

Signed-off-by: Ville Ojamo <[email protected]>
(cherry picked from commit cbb9ab7)
1 parent da27661 commit f3a47e1

File tree

1 file changed: +97 / -89 lines changed


doc/radosgw/dynamicresharding.rst

Lines changed: 97 additions & 89 deletions
@@ -23,7 +23,7 @@ resharding process, but reads are not.
 
 By default dynamic bucket index resharding can only increase the
 number of bucket index shards to 1999, although this upper-bound is a
-configuration parameter (see Configuration below). When
+configuration parameter (see `Configuration`_ below). When
 possible, the process chooses a prime number of shards in order to
 spread the number of entries across the bucket index
 shards more evenly.
@@ -43,9 +43,9 @@ buckets that fluctuate in numbers of objects.
 Multisite
 =========
 
-With Ceph releases Prior to Reef, the Ceph Object Gateway (RGW) does not support
+With Ceph releases prior to Reef, the Ceph Object Gateway (RGW) does not support
 dynamic resharding in a
-multisite environment. For information on dynamic resharding, see
+multisite deployment. For information on dynamic resharding, see
 :ref:`Resharding <feature_resharding>` in the RGW multisite documentation.
 
 Configuration
@@ -62,98 +62,106 @@ Configuration
 .. confval:: rgw_reshard_progress_judge_interval
 .. confval:: rgw_reshard_progress_judge_ratio
 
-Admin commands
+Admin Commands
 ==============
 
-Add a bucket to the resharding queue
+Add a Bucket to the Resharding Queue
 ------------------------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
+radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
 
-List resharding queue
+List Resharding Queue
 ---------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard list
+radosgw-admin reshard list
 
-Process tasks on the resharding queue
+Process Tasks on the Resharding Queue
 -------------------------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard process
+radosgw-admin reshard process
 
-Bucket resharding status
+Bucket Resharding Status
 ------------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard status --bucket <bucket_name>
+radosgw-admin reshard status --bucket <bucket_name>
 
-The output is a JSON array of 3 objects (reshard_status, new_bucket_instance_id, num_shards) per shard.
+The output is a JSON array of 3 properties (``reshard_status``, ``new_bucket_instance_id``, ``num_shards``) per shard.
 
 For example, the output at each dynamic resharding stage is shown below:
 
-``1. Before resharding occurred:``
-::
-
-[
-{
-"reshard_status": "not-resharding",
-"new_bucket_instance_id": "",
-"num_shards": -1
-}
-]
-
-``2. During resharding:``
-::
-
-[
-{
-"reshard_status": "in-progress",
-"new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
-"num_shards": 2
-},
-{
-"reshard_status": "in-progress",
-"new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
-"num_shards": 2
-}
-]
-
-``3. After resharding completed:``
-::
-
-[
-{
-"reshard_status": "not-resharding",
-"new_bucket_instance_id": "",
-"num_shards": -1
-},
-{
-"reshard_status": "not-resharding",
-"new_bucket_instance_id": "",
-"num_shards": -1
-}
-]
-
-
-Cancel pending bucket resharding
+#. Before resharding occurred:
+
+::
+
+[
+{
+"reshard_status": "not-resharding",
+"new_bucket_instance_id": "",
+"num_shards": -1
+}
+]
+
+#. During resharding:
+
+::
+
+[
+{
+"reshard_status": "in-progress",
+"new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+"num_shards": 2
+},
+{
+"reshard_status": "in-progress",
+"new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+"num_shards": 2
+}
+]
+
+#. After resharding completed:
+
+::
+
+[
+{
+"reshard_status": "not-resharding",
+"new_bucket_instance_id": "",
+"num_shards": -1
+},
+{
+"reshard_status": "not-resharding",
+"new_bucket_instance_id": "",
+"num_shards": -1
+}
+]
+
+
+Cancel Pending Bucket Resharding
 --------------------------------
 
-Note: Bucket resharding tasks cannot be cancelled once they start executing. ::
+.. note::
 
-# radosgw-admin reshard cancel --bucket <bucket_name>
+Bucket resharding tasks cannot be canceled once they transition to
+the ``in-progress`` state from the initial ``not-resharding`` state.
 
-Manual immediate bucket resharding
+.. prompt:: bash #
+
+radosgw-admin reshard cancel --bucket <bucket_name>
+
+Manual Immediate Bucket Resharding
 ----------------------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin bucket reshard --bucket <bucket_name> --num-shards <new number of shards>
+radosgw-admin bucket reshard --bucket <bucket_name> --num-shards <new number of shards>
 
 When choosing a number of shards, the administrator must anticipate each
 bucket's peak number of objects. Ideally one should aim for no
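
The per-shard status output documented in the hunk above (three properties: ``reshard_status``, ``new_bucket_instance_id``, ``num_shards``) is easy to consume from a script. The following is a rough sketch, not part of this patch, assuming only that ``radosgw-admin`` is on the PATH and can reach the cluster::

    #!/usr/bin/env python3
    # Sketch only: summarize "radosgw-admin reshard status" for one bucket,
    # relying solely on the three properties documented in the doc change.
    import json
    import subprocess
    import sys

    def reshard_status(bucket):
        out = subprocess.run(
            ["radosgw-admin", "reshard", "status", "--bucket", bucket],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)  # JSON array, one entry per shard

    def summarize(bucket):
        entries = reshard_status(bucket)
        busy = [e for e in entries if e.get("reshard_status") == "in-progress"]
        print(f"{bucket}: {len(entries)} shard entries, {len(busy)} in progress")
        for e in busy:
            print(f'  new instance {e.get("new_bucket_instance_id")}, '
                  f'{e.get("num_shards")} shards')

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "mybucket")

Here ``mybucket`` is only a placeholder default; pass a real bucket name as the first argument.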
@@ -166,12 +174,12 @@ since the former is prime. A variety of web sites have lists of prime
 numbers; search for "list of prime numbers" with your favorite
 search engine to locate some web sites.
 
-Setting a bucket's minimum number of shards
+Setting a Bucket's Minimum Number of Shards
 -------------------------------------------
 
-::
+.. prompt:: bash #
 
-# radosgw-admin bucket set-min-shards --bucket <bucket_name> --num-shards <min number of shards>
+radosgw-admin bucket set-min-shards --bucket <bucket_name> --num-shards <min number of shards>
 
 Since dynamic resharding can now reduce the number of shards,
 administrators may want to prevent the number of shards from becoming
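
The shard-count guidance in the hunk above (anticipate the bucket's peak object count and prefer a prime number of shards) can be expressed as a small helper. A sketch, not from the patch; the per-shard entry target is left as a parameter because the recommended value is not quoted in this excerpt::

    # Sketch only: derive a prime shard count from an anticipated peak
    # object count. entries_per_shard is the operator's chosen target;
    # the value 100_000 below is purely illustrative.
    import math

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, math.isqrt(n) + 1))

    def next_prime(n):
        while not is_prime(n):
            n += 1
        return n

    def suggest_shards(peak_objects, entries_per_shard):
        needed = max(1, math.ceil(peak_objects / entries_per_shard))
        return next_prime(needed)

    # ~7.5 million objects at an illustrative 100k entries per shard:
    # 75 rounds up to 79, the next prime.
    print(suggest_shards(7_500_000, 100_000))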
@@ -185,27 +193,28 @@ Troubleshooting
 
 Clusters prior to Luminous 12.2.11 and Mimic 13.2.5 left behind stale bucket
 instance entries, which were not automatically cleaned up. This issue also affected
-LifeCycle policies, which were no longer applied to resharded buckets. Both of
-these issues could be worked around by running ``radosgw-admin`` commands.
+lifecycle policies, which were no longer applied to resharded buckets. Both of
+these issues can be remediated by running ``radosgw-admin`` commands.
 
-Stale instance management
+Stale Instance Management
 -------------------------
 
-List the stale instances in a cluster that are ready to be cleaned up.
+List the stale instances in a cluster that may be cleaned up:
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard stale-instances list
+radosgw-admin reshard stale-instances list
 
-Clean up the stale instances in a cluster. Note: cleanup of these
-instances should only be done on a single-site cluster.
+Clean up the stale instances in a cluster:
 
-::
+.. prompt:: bash #
 
-# radosgw-admin reshard stale-instances delete
+radosgw-admin reshard stale-instances delete
 
+.. note:: Cleanup of stale instances should not be done in a multisite deployment.
 
-Lifecycle fixes
+
+Lifecycle Fixes
 ---------------
 
 For clusters with resharded instances, it is highly likely that the old
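
Because the new admonition in this hunk warns against stale-instance cleanup in a multisite deployment, an operator may want to keep listing and deletion separate. A sketch, not part of the patch, that uses only the two subcommands documented above and assumes the ``list`` subcommand prints a JSON array; the ``--i-am-single-site`` flag is a hypothetical safeguard of this wrapper, not a ``radosgw-admin`` option::

    # Sketch only: list stale bucket instances and delete them only when the
    # operator explicitly confirms a single-site cluster. The
    # --i-am-single-site flag belongs to this wrapper, not to radosgw-admin.
    import argparse
    import json
    import subprocess

    def radosgw_admin(*args):
        out = subprocess.run(["radosgw-admin", *args],
                             check=True, capture_output=True, text=True)
        return out.stdout

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("--i-am-single-site", action="store_true")
        opts = parser.parse_args()

        stale = json.loads(radosgw_admin("reshard", "stale-instances", "list"))
        print(f"{len(stale)} stale instance entries found")

        if stale and opts.i_am_single_site:
            radosgw_admin("reshard", "stale-instances", "delete")
            print("stale instances deleted")
        elif stale:
            print("re-run with --i-am-single-site to delete (skip on multisite)")

    if __name__ == "__main__":
        main()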
@@ -217,15 +226,15 @@ resharding must be fixed manually.
 
 The command to do so is:
 
-::
+.. prompt:: bash #
 
-# radosgw-admin lc reshard fix --bucket {bucketname}
+radosgw-admin lc reshard fix --bucket {bucketname}
 
 
 If the ``--bucket`` argument is not provided, this
 command will try to fix lifecycle policies for all the buckets in the cluster.
 
-Object Expirer fixes
+Object Expirer Fixes
 --------------------
 
 Objects subject to Swift object expiration on older clusters may have
@@ -237,17 +246,16 @@ objects, ``radosgw-admin`` provides two subcommands.
 
 Listing:
 
-::
+.. prompt:: bash #
 
-# radosgw-admin objects expire-stale list --bucket {bucketname}
+radosgw-admin objects expire-stale list --bucket {bucketname}
 
 Displays a list of object names and expiration times in JSON format.
 
 Deleting:
 
-::
-
-# radosgw-admin objects expire-stale rm --bucket {bucketname}
+.. prompt:: bash #
 
+radosgw-admin objects expire-stale rm --bucket {bucketname}
 
 Initiates deletion of such objects, displaying a list of object names, expiration times, and deletion status in JSON format.
