
Commit 6a68def

doc/cephadm: staggered upgrade docs
Signed-off-by: Adam King <[email protected]>
1 parent 0a46fcb commit 6a68def

File tree

1 file changed: +97 -0 lines changed

doc/cephadm/upgrade.rst

Lines changed: 97 additions & 0 deletions
@@ -188,3 +188,100 @@ you need. For example, the following command upgrades to a development build:
   ceph orch upgrade start --image quay.io/ceph-ci/ceph:recent-git-branch-name

For more information about available container images, see :ref:`containers`.

Staggered Upgrade
=================

Some users may prefer to upgrade components in phases rather than all at once.
The upgrade command, starting in 16.2.10 and 17.2.1, allows parameters that
limit which daemons are upgraded by a single upgrade command. The options
include ``daemon_types``, ``services``, ``hosts`` and ``limit``. ``daemon_types``
takes a comma-separated list of daemon types and will only upgrade daemons of those
types. ``services`` is mutually exclusive with ``daemon_types``, only takes services
of one type at a time (e.g. you cannot provide an OSD and an RGW service at the same
time), and will only upgrade daemons belonging to those services. ``hosts`` can be
combined with ``daemon_types`` or ``services`` or provided on its own. The ``hosts``
parameter follows the same format as the command line options for
:ref:`orchestrator-cli-placement-spec`. ``limit`` takes an integer > 0 and provides
a numerical limit on the number of daemons cephadm will upgrade. ``limit`` can be
combined with any of the other parameters. For example, if you specify to upgrade
daemons of type osd on host Host1 with ``limit`` set to 3, cephadm will upgrade
(up to) 3 osd daemons on Host1.

Example: specifying daemon types and hosts:

.. prompt:: bash #

   ceph orch upgrade start --image <image-name> --daemon-types mgr,mon --hosts host1,host2

Example: specifying services and using limit:

.. prompt:: bash #

   ceph orch upgrade start --image <image-name> --services rgw.example1,rgw.example2 --limit 2

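Example: combining daemon types, hosts and limit. This corresponds to the ``limit``
scenario described above (upgrade at most 3 osd daemons on one host); ``Host1`` is a
placeholder host name:

.. prompt:: bash #

   ceph orch upgrade start --image <image-name> --daemon-types osd --hosts Host1 --limit 3
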
.. note::

   Cephadm strictly enforces an ordering for daemon upgrades, and that ordering is
   still enforced in staggered upgrade scenarios. The current upgrade ordering is
   ``mgr -> mon -> crash -> osd -> mds -> rgw -> rbd-mirror -> cephfs-mirror -> iscsi -> nfs``.
   If you specify parameters that would upgrade daemons out of order, the upgrade
   command will block and note which daemons will be missed if you proceed.

.. note::

   Upgrade commands with limiting parameters will validate the options before beginning the
   upgrade, which may require pulling the new container image. Do not be surprised
   if the upgrade start command takes a while to return when limiting parameters are provided.

.. note::

   In staggered upgrade scenarios (when a limiting parameter is provided), monitoring
   stack daemons including Prometheus and node-exporter are refreshed after the Manager
   daemons have been upgraded. Do not be surprised if Manager upgrades therefore take
   longer than expected. Note that the versions of monitoring stack daemons may not change
   between Ceph releases, in which case they are only redeployed.

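For example, a complete upgrade carried out in phases might be run as a sequence of
commands like the following, each started only after the previous one has finished.
This is only a sketch: how the daemon types are grouped is up to you, as long as the
ordering above is respected:

.. prompt:: bash #

   ceph orch upgrade start --image <image-name> --daemon-types mgr,mon
   ceph orch upgrade start --image <image-name> --daemon-types crash,osd
   ceph orch upgrade start --image <image-name> --daemon-types mds,rgw,rbd-mirror,cephfs-mirror,iscsi,nfs
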
Upgrading to a version that supports staggered upgrade from one that doesn't
----------------------------------------------------------------------------

When upgrading from a version that already supports staggered upgrades, the process
simply requires providing the necessary arguments. However, if you wish to upgrade
to a version that supports staggered upgrade from one that does not, there is a
workaround. It requires first manually upgrading the Manager daemons and then passing
the limiting parameters as usual.

.. warning::

   Make sure you have multiple running mgr daemons before attempting this procedure.

To start with, determine which Manager is your active one and which are standby. This
can be done in a variety of ways, such as looking at the ``ceph -s`` output.

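For example (a minimal check; the ``mgr:`` line of the status output names the active
Manager and any standbys, which also confirms that more than one mgr daemon is running):

.. prompt:: bash #

   ceph -s
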
Then, manually upgrade each standby mgr daemon with:

.. prompt:: bash #

   ceph orch daemon redeploy mgr.example1.abcdef --image <new-image-name>

.. note::

   If you are on a very early version of cephadm (early Octopus) the ``orch daemon redeploy``
   command may not have the ``--image`` flag. In that case, you must manually set the
   Manager container image with ``ceph config set mgr container_image <new-image-name>`` and
   then redeploy the Manager with ``ceph orch daemon redeploy mgr.example1.abcdef``.

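   For example, the two-step form of the commands described in this note (using the same
   placeholder daemon and image names as above) would be:

   .. prompt:: bash #

      ceph config set mgr container_image <new-image-name>
      ceph orch daemon redeploy mgr.example1.abcdef
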
At this point, failing over the Manager should result in the active Manager being one
that is running the new version:

.. prompt:: bash #

   ceph mgr fail

Verify that the active Manager is now running the new version.

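One possible way to check: ``ceph -s`` shows which Manager is now active, and
``ceph versions`` reports the versions running for each daemon type, including the
mgr daemons:

.. prompt:: bash #

   ceph versions
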
To complete the Manager upgrade:

.. prompt:: bash #

   ceph orch upgrade start --image <new-image-name> --daemon-types mgr

You should now have all your Manager daemons on the new version and be able to
specify the limiting parameters for the rest of the upgrade.

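For example, the next phase under the ordering described above could be (a sketch; any
of the limiting parameters described earlier may be used):

.. prompt:: bash #

   ceph orch upgrade start --image <new-image-name> --daemon-types mon,crash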
