
Commit 3eb84b7

Merge pull request ceph#61685 from zdover23/wip-doc-2025-02-07-cephadm-services-osd

doc/cephadm: improve "Activate Existing OSDs"

Reviewed-by: Anthony D'Atri <[email protected]>

2 parents 368e944 + 2de592e commit 3eb84b7

File tree: 1 file changed

doc/cephadm/services/osd.rst: 92 additions & 6 deletions

@@ -1057,15 +1057,101 @@ using the `paths` keyword with the following syntax:

Activate existing OSDs
======================

If the operating system of a host has been reinstalled, the existing OSDs
associated with it must be activated again. ``cephadm`` provides a wrapper for
:ref:`ceph-volume-lvm-activate` that activates all existing OSDs on a host.

The following procedure explains how to use ``cephadm`` to activate OSDs on a
host that has recently had its operating system reinstalled.

This procedure assumes the existence of two hosts: ``ceph01`` and ``ceph04``.

- ``ceph01`` is a host equipped with an admin keyring.
- ``ceph04`` is the host with the recently reinstalled operating system.

#. Install ``cephadm`` and ``podman`` on the host. The command for installing
   these utilities will depend upon the operating system of the host; a
   hedged example for one common case is sketched below.
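
   For example (a hedged sketch only: it assumes an RPM-based distribution
   on which the relevant Ceph and EPEL repositories are already configured;
   package names and installation steps differ across operating systems):

   .. prompt:: bash ceph04#

      dnf install -y podman cephadm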

#. Retrieve the cluster's public SSH key:

   .. prompt:: bash ceph01#

      ceph cephadm get-pub-key > ceph.pub

#. Copy the key to the freshly reinstalled host. Because ``ceph.pub`` was
   written on ``ceph01``, run this command from ``ceph01``:

   .. prompt:: bash ceph01#

      ssh-copy-id -f -i ceph.pub root@<hostname>

#. Retrieve the private key in order to test the connection:

   .. prompt:: bash ceph01#

      ceph config-key get mgr/cephadm/ssh_identity_key > ceph-private.key

#. From ``ceph01``, modify the permissions of ``ceph-private.key``:

   .. prompt:: bash ceph01#

      chmod 400 ceph-private.key

#. Log in to ``ceph04`` from ``ceph01`` to test the connection and
   configuration:

   .. prompt:: bash ceph01#

      ssh -i ceph-private.key ceph04

#. While logged in to ``ceph01``, remove ``ceph.pub`` and
   ``ceph-private.key``:

   .. prompt:: bash ceph01#

      rm ceph.pub ceph-private.key

#. If you run your own container registry, instruct the orchestrator to log
   in to it on each host:

   .. prompt:: bash #

      ceph cephadm registry-login my-registry.domain <user> <password>

   When the orchestrator performs the registry login, it will attempt to
   deploy any missing daemons to the host. This includes ``crash``,
   ``node-exporter``, and any other daemons that the host ran before its
   operating system was reinstalled.

   More precisely: ``cephadm`` attempts to deploy missing daemons to all
   hosts that have been put under management by ``cephadm`` when ``cephadm``
   determines that those hosts are online. In this context, "online" means
   "present in the output of the ``ceph orch host ls`` command and possessing
   a status that is not ``offline`` or ``maintenance``". If it is necessary
   to log in to the registry in order to pull the images for the missing
   daemons, then the deployment of those daemons will fail until the process
   of logging in to the registry has been completed.
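
   You can confirm that the host is under management and online by listing
   the hosts known to the orchestrator (the exact output columns vary by
   Ceph release):

   .. prompt:: bash ceph01#

      ceph orch host ls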

   .. note:: This step is not necessary if you do not run your own container
      registry. If your host is still in the "host list", which can be
      retrieved by running the command ``ceph orch host ls``, you do not need
      to run this command.

#. Activate the OSDs on the host that has recently had its operating system
   reinstalled:

   .. prompt:: bash #

      ceph cephadm osd activate ceph04

   This command causes ``cephadm`` to scan all existing disks for OSDs and to
   deploy any missing daemons to the specified host.
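
   To verify that the OSDs have come back up after activation, you can
   inspect the daemons and the OSD tree (a minimal sketch; it assumes these
   commands are run from a host with an admin keyring, such as ``ceph01``):

   .. prompt:: bash ceph01#

      ceph orch ps ceph04
      ceph osd tree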

*This procedure was developed by Eugen Block in February of 2025, and a blog
post pertinent to its development can be seen here:*
`Eugen Block's "Cephadm: Activate existing OSDs" blog post <https://heiterbiswolkig.blogs.nde.ag/2025/02/06/cephadm-activate-existing-osds/>`_.

Further Reading
===============
