
Commit 54cdc1d

Merge pull request ceph#45786 from adk3798/staggered-upgrade

mgr/cephadm: staggered upgrade

Reviewed-by: Anthony D'Atri <[email protected]>
Reviewed-by: Redouane Kachach <[email protected]>

2 parents: c6e5724 + 6a68def

File tree: 12 files changed, +931 -261 lines


doc/cephadm/upgrade.rst

Lines changed: 97 additions & 0 deletions
@@ -188,3 +188,100 @@ you need. For example, the following command upgrades to a development build:
   ceph orch upgrade start --image quay.io/ceph-ci/ceph:recent-git-branch-name

For more information about available container images, see :ref:`containers`.

Staggered Upgrade
=================

Some users may prefer to upgrade components in phases rather than all at once.
The upgrade command, starting in 16.2.10 and 17.2.1, allows parameters
to limit which daemons are upgraded by a single upgrade command. The options
include ``daemon_types``, ``services``, ``hosts`` and ``limit``. ``daemon_types``
takes a comma-separated list of daemon types and will only upgrade daemons of those
types. ``services`` is mutually exclusive with ``daemon_types``, only takes services
of one type at a time (e.g. can't provide an OSD and RGW service at the same time), and
will only upgrade daemons belonging to those services. ``hosts`` can be combined
with ``daemon_types`` or ``services`` or provided on its own. The ``hosts`` parameter
follows the same format as the command line options for :ref:`orchestrator-cli-placement-spec`.
``limit`` takes an integer > 0 and provides a numerical limit on the number of
daemons cephadm will upgrade. ``limit`` can be combined with any of the other
parameters. For example, if you specify to upgrade daemons of type osd on host
Host1 with ``limit`` set to 3, cephadm will upgrade (up to) 3 osd daemons on
Host1.
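
Expressed as a command, that osd scenario would look like the following
(``Host1`` stands in for one of your own host names):

.. prompt:: bash #

  ceph orch upgrade start --image <image-name> --daemon-types osd --hosts Host1 --limit 3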

Example: specifying daemon types and hosts:

.. prompt:: bash #

  ceph orch upgrade start --image <image-name> --daemon-types mgr,mon --hosts host1,host2

Example: specifying services and using limit:

.. prompt:: bash #

  ceph orch upgrade start --image <image-name> --services rgw.example1,rgw.example2 --limit 2

.. note::

   Cephadm strictly enforces an order to the upgrade of daemons that is still present
   in staggered upgrade scenarios. The current upgrade ordering is
   ``mgr -> mon -> crash -> osd -> mds -> rgw -> rbd-mirror -> cephfs-mirror -> iscsi -> nfs``.
   If you specify parameters that would upgrade daemons out of order, the upgrade
   command will block and note which daemons will be missed if you proceed.

.. note::

   Upgrade commands with limiting parameters will validate the options before beginning the
   upgrade, which may require pulling the new container image. Do not be surprised
   if the upgrade start command takes a while to return when limiting parameters are provided.

.. note::

   In staggered upgrade scenarios (when a limiting parameter is provided), monitoring
   stack daemons including Prometheus and node-exporter are refreshed after the Manager
   daemons have been upgraded. Do not be surprised if Manager upgrades therefore take longer
   than expected. Note that the versions of monitoring stack daemons may not change between
   Ceph releases, in which case they are only redeployed.

Upgrading to a version that supports staggered upgrade from one that doesn't
------------------------------------------------------------------------------

When upgrading from a version that already supports staggered upgrades, the process
simply requires providing the necessary arguments. However, if you wish to upgrade
to a version that supports staggered upgrade from one that does not, there is a
workaround. It requires first manually upgrading the Manager daemons and then passing
the limiting parameters as usual.

.. warning::
   Make sure you have multiple running mgr daemons before attempting this procedure.

To start with, determine which Manager is your active one and which are standby. This
can be done in a variety of ways, such as looking at the ``ceph -s`` output. Then,
manually upgrade each standby mgr daemon with:

.. prompt:: bash #

  ceph orch daemon redeploy mgr.example1.abcdef --image <new-image-name>

.. note::

   If you are on a very early version of cephadm (early Octopus), the ``orch daemon redeploy``
   command may not have the ``--image`` flag. In that case, you must manually set the
   Manager container image with ``ceph config set mgr container_image <new-image-name>`` and then
   redeploy the Manager with ``ceph orch daemon redeploy mgr.example1.abcdef``.
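
If you want to list the standby Manager names programmatically, the QA test added in
this commit does essentially the following (``jq`` is assumed to be installed):

.. prompt:: bash #

  ceph mgr dump -f json | jq -r '.standbys[].name'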

At this point, fail over the Manager so that the active Manager becomes one running
the new version:

.. prompt:: bash #

  ceph mgr fail

Verify that the active Manager is now running the new version. To complete the upgrade
of the Manager daemons:

.. prompt:: bash #

  ceph orch upgrade start --image <new-image-name> --daemon-types mgr

You should now have all your Manager daemons on the new version and be able to
specify the limiting parameters for the rest of the upgrade.
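
To confirm, you can check that only a single mgr version remains. This mirrors the
check used by the QA test added in this commit (``jq`` is assumed to be available):

.. prompt:: bash #

  ceph versions | jq -e '.mgr | length == 1'
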
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
../.qa/
File renamed without changes.
Lines changed: 111 additions & 0 deletions
@@ -0,0 +1,111 @@
tasks:
- cephadm.shell:
    env: [sha1]
    mon.a:
    - radosgw-admin realm create --rgw-realm=r --default
    - radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    - radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default
    - radosgw-admin period update --rgw-realm=r --commit
    - ceph orch apply rgw r z --placement=2 --port=8000
    - sleep 180
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
    - ceph config set global log_to_journald false --force
    # get some good info on the state of things pre-upgrade. Useful for debugging
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph orch ls
    # doing a staggered upgrade requires the mgr daemons to be on a version that contains the staggered upgrade code
    # until there is a stable version that contains it, we can test by manually upgrading a mgr daemon
    - ceph config set mgr container_image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)"
    - ceph orch ps --refresh
    - sleep 180
    # gather more possible debugging info
    - ceph orch ps
    - ceph versions
    - ceph -s
    # check that there are two different versions found for the mgr daemons (which implies we upgraded one)
    - ceph versions | jq -e '.mgr | length == 2'
    - ceph mgr fail
    - sleep 180
    # now try upgrading the other mgr
    # we should now have access to the --image flag for the daemon redeploy command
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch ps --refresh
    - sleep 180
    # gather more possible debugging info
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph mgr fail
    - sleep 180
    # gather more debugging info
    - ceph orch ps
    - ceph versions
    - ceph -s
    # now that both mgrs should have been redeployed with the new version, we should be back to only 1 version for the mgrs
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph mgr fail
    - sleep 180
    # debugging info
    - ceph orch ps
    - ceph versions
    # to make sure the mgr daemon upgrade is fully completed, including being deployed by a mgr on the new version
    # also serves as an early failure if manually upgrading the mgrs failed, as --daemon-types won't be recognized
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    # verify only one version found for mgrs and that their version hash matches what we are upgrading to
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph versions | jq -e '.mgr | keys' | grep $sha1
    # verify overall we still see two versions, basically to make sure --daemon-types wasn't ignored and all daemons upgraded
    - ceph versions | jq -e '.overall | length == 2'
    # check that exactly two daemons have been upgraded to the new image (our 2 mgr daemons)
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 2'
    # upgrade only the mons on one of the two hosts
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    # verify two different versions seen for mons
    - ceph versions | jq -e '.mon | length == 2'
    # upgrade the mons on the other host
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.y | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    # verify all mons are now on the same version and the version hash matches what we are upgrading to
    - ceph versions | jq -e '.mon | length == 1'
    - ceph versions | jq -e '.mon | keys' | grep $sha1
    # verify exactly 5 daemons are now upgraded (2 mgrs, 3 mons)
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 5'
    # upgrade exactly 2 osd daemons
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    # verify two different versions now seen for osds
    - ceph versions | jq -e '.osd | length == 2'
    # verify exactly 7 daemons have been upgraded (2 mgrs, 3 mons, 2 osds)
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 7'
    # upgrade one more daemon (--limit 1 with the crash and osd daemon types)
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 2'
    # verify that 8 daemons have now been upgraded
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 8'
    # upgrade the rest of the osds
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    # verify all osds are now on the same version and the version hash matches what we are upgrading to
    - ceph versions | jq -e '.osd | length == 1'
    - ceph versions | jq -e '.osd | keys' | grep $sha1
    # upgrade the rgw daemons using --services
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.r.z
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    # verify all rgw daemons are on the same version and the version hash matches what we are upgrading to
    - ceph versions | jq -e '.rgw | length == 1'
    - ceph versions | jq -e '.rgw | keys' | grep $sha1
    # run the upgrade one more time with no filter parameters to make sure anything left gets upgraded
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1

src/pybind/mgr/cephadm/module.py

Lines changed: 30 additions & 3 deletions
@@ -60,7 +60,7 @@
  from .upgrade import CephadmUpgrade
  from .template import TemplateMgr
  from .utils import CEPH_IMAGE_TYPES, RESCHEDULE_FROM_OFFLINE_HOSTS_TYPES, forall_hosts, \
-     cephadmNoImage
+     cephadmNoImage, CEPH_UPGRADE_ORDER
  from .configchecks import CephadmConfigChecks
  from .offline_watcher import OfflineHostWatcher

@@ -2692,10 +2692,37 @@ def upgrade_ls(self, image: Optional[str], tags: bool, show_all_versions: Option
          return self.upgrade.upgrade_ls(image, tags, show_all_versions)

      @handle_orch_error
-     def upgrade_start(self, image: str, version: str) -> str:
+     def upgrade_start(self, image: str, version: str, daemon_types: Optional[List[str]] = None, host_placement: Optional[str] = None,
+                       services: Optional[List[str]] = None, limit: Optional[int] = None) -> str:
          if self.inventory.get_host_with_state("maintenance"):
              raise OrchestratorError("upgrade aborted - you have host(s) in maintenance state")
-         return self.upgrade.upgrade_start(image, version)
+         if daemon_types is not None and services is not None:
+             raise OrchestratorError('--daemon-types and --services are mutually exclusive')
+         if daemon_types is not None:
+             for dtype in daemon_types:
+                 if dtype not in CEPH_UPGRADE_ORDER:
+                     raise OrchestratorError(f'Upgrade aborted - Got unexpected daemon type "{dtype}".\n'
+                                             f'Viable daemon types for this command are: {utils.CEPH_TYPES + utils.GATEWAY_TYPES}')
+         if services is not None:
+             for service in services:
+                 if service not in self.spec_store:
+                     raise OrchestratorError(f'Upgrade aborted - Got unknown service name "{service}".\n'
+                                             f'Known services are: {self.spec_store.all_specs.keys()}')
+         hosts: Optional[List[str]] = None
+         if host_placement is not None:
+             all_hosts = list(self.inventory.all_specs())
+             placement = PlacementSpec.from_string(host_placement)
+             hosts = placement.filter_matching_hostspecs(all_hosts)
+             if not hosts:
+                 raise OrchestratorError(
+                     f'Upgrade aborted - hosts parameter "{host_placement}" provided did not match any hosts')
+
+         if limit is not None:
+             if limit < 1:
+                 raise OrchestratorError(
+                     f'Upgrade aborted - --limit arg must be a positive integer, not {limit}')
+
+         return self.upgrade.upgrade_start(image, version, daemon_types, hosts, services, limit)

      @handle_orch_error
      def upgrade_pause(self) -> str:
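
For context on how the new validation surfaces to users: combining the two mutually
exclusive filters is rejected before any upgrade state is touched. A hedged sketch
(the service name is a placeholder, and the exact error prefix depends on how the
CLI reports an OrchestratorError):

  ceph orch upgrade start --image <image-name> --daemon-types mgr --services rgw.example1
  # expected to fail with: --daemon-types and --services are mutually exclusive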

src/pybind/mgr/cephadm/services/osd.py

Lines changed: 2 additions & 1 deletion
@@ -298,7 +298,8 @@ def generate_previews(self, osdspecs: List[DriveGroupSpec], for_host: str) -> Li

          # driveselection for host
          cmds: List[str] = self.driveselection_to_ceph_volume(ds,
-                                                               osd_id_claims.filtered_by_host(host),
+                                                               osd_id_claims.filtered_by_host(
+                                                                   host),
                                                                preview=True)
          if not cmds:
              logger.debug("No data_devices, skipping DriveGroup: {}".format(

src/pybind/mgr/cephadm/tests/test_cephadm.py

Lines changed: 8 additions & 4 deletions
@@ -166,9 +166,11 @@ def test_re_add_host_receive_loopback(self, resolve_ip, cephadm_module):
          resolve_ip.side_effect = ['192.168.122.1', '127.0.0.1', '127.0.0.1']
          assert wait(cephadm_module, cephadm_module.get_hosts()) == []
          cephadm_module._add_host(HostSpec('test', '192.168.122.1'))
-         assert wait(cephadm_module, cephadm_module.get_hosts()) == [HostSpec('test', '192.168.122.1')]
+         assert wait(cephadm_module, cephadm_module.get_hosts()) == [
+             HostSpec('test', '192.168.122.1')]
          cephadm_module._add_host(HostSpec('test'))
-         assert wait(cephadm_module, cephadm_module.get_hosts()) == [HostSpec('test', '192.168.122.1')]
+         assert wait(cephadm_module, cephadm_module.get_hosts()) == [
+             HostSpec('test', '192.168.122.1')]
          with pytest.raises(OrchestratorError):
              cephadm_module._add_host(HostSpec('test2'))

@@ -894,7 +896,8 @@ def test_driveselection_to_ceph_volume(self, cephadm_module, devices, preview, e
          ds = DriveSelection(dg, Devices([Device(path) for path in devices]))
          preview = preview
          out = cephadm_module.osd_service.driveselection_to_ceph_volume(ds, [], preview)
-         assert all(any(cmd in exp_cmd for exp_cmd in exp_commands) for cmd in out), f'Expected cmds from f{out} in {exp_commands}'
+         assert all(any(cmd in exp_cmd for exp_cmd in exp_commands)
+                    for cmd in out), f'Expected cmds from f{out} in {exp_commands}'

      @pytest.mark.parametrize(
          "devices, preview, exp_commands",
@@ -919,7 +922,8 @@ def test_raw_driveselection_to_ceph_volume(self, cephadm_module, devices, previe
          ds = DriveSelection(dg, Devices([Device(path) for path in devices]))
          preview = preview
          out = cephadm_module.osd_service.driveselection_to_ceph_volume(ds, [], preview)
-         assert all(any(cmd in exp_cmd for exp_cmd in exp_commands) for cmd in out), f'Expected cmds from f{out} in {exp_commands}'
+         assert all(any(cmd in exp_cmd for exp_cmd in exp_commands)
+                    for cmd in out), f'Expected cmds from f{out} in {exp_commands}'

      @mock.patch("cephadm.serve.CephadmServe._run_cephadm", _run_cephadm(
          json.dumps([
