Commit 5b8c155
cephadm: link new OSDs to existing managed services
Added logic for new OSDs to link to existing managed services. The create_osds function now dynamically assigns the service_id based on matching entries in the spec_store. If no service name is provided, the OSDs are created under the 'osd.default' service name as managed OSDs with a valid spec.

Signed-off-by: Kushal Deb <[email protected]>
1 parent: abdefd2

5 files changed: +43, -8 lines
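
Before the per-file diffs, here is a condensed, illustrative sketch of the control flow this commit adds. It is a minimal stand-in, not the real module: the plain dict models cephadm's spec_store, `FakeDriveGroup` stands in for `DriveGroupSpec`, and saving/applying the spec and `osd_service.create_from_spec()` are elided (see the `module.py` hunk below for the actual code).

    # Condensed, illustrative sketch of the flow added in module.py below.
    # Assumption: spec_store is modeled as a plain dict of service_id -> spec.
    from typing import Dict, Optional


    class FakeDriveGroup:
        """Simplified stand-in for ceph.deployment.drive_group.DriveGroupSpec."""

        def __init__(self, service_id: Optional[str] = None) -> None:
            self.service_id = service_id
            self.unmanaged = True


    def create_osds(drive_group: FakeDriveGroup,
                    spec_store: Dict[str, FakeDriveGroup]) -> str:
        # No service name supplied: fall back to the 'default' service_id so
        # the new OSDs land under the 'osd.default' service.
        if not drive_group.service_id:
            drive_group.service_id = "default"

        # No matching spec saved yet: store a managed default spec so these
        # (and later) OSDs attach to it rather than staying unmanaged.
        if drive_group.service_id not in spec_store:
            drive_group.unmanaged = False
            spec_store[drive_group.service_id] = drive_group

        return f"created OSDs under osd.{drive_group.service_id}"


    # Example: a spec with no service name ends up managed under osd.default.
    store: Dict[str, FakeDriveGroup] = {}
    print(create_osds(FakeDriveGroup(), store))   # created OSDs under osd.default
    assert "default" in store and store["default"].unmanaged is False

The real `create_osd_default_spec` additionally rebuilds the placement and data-device list before saving the default spec.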

doc/cephadm/services/osd.rst (7 additions, 5 deletions)
@@ -199,17 +199,19 @@ There are multiple ways to create new OSDs:
 
 .. warning:: When deploying new OSDs with ``cephadm``, ensure that the ``ceph-osd`` package is not installed on the target host. If it is installed, conflicts may arise in the management and control of the OSD that may lead to errors or unexpected behavior.
 
-* OSDs created via ``ceph orch daemon add`` are by default not added to the orchestrator's OSD service. To attach an OSD to a different, existing OSD service, issue a command of the following form:
+* New OSDs created using ``ceph orch daemon add osd`` are added under ``osd.default`` as managed OSDs with a valid spec.
 
-  .. prompt:: bash *
+  To attach an existing OSD to a different managed service, the ``ceph orch osd set-spec-affinity`` command can be used:
 
-     ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
+  .. prompt:: bash #
+
+     ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
 
   For example:
 
   .. prompt:: bash #
-
-     ceph orch osd set-spec-affinity osd.default_drive_group 0 1
+
+     ceph orch osd set-spec-affinity osd.default_drive_group 0 1
 
 Dry Run
 -------

qa/suites/orch/cephadm/smoke-small/start.yaml (1 addition, 1 deletion)
@@ -20,4 +20,4 @@ tasks:
     - ceph orch host ls
     - ceph orch device ls
     - ceph orch ls --format yaml
-    - ceph orch ls | grep '^osd '
+    - ceph orch ls | grep 'osd'

qa/suites/orch/cephadm/smoke/start.yaml (1 addition, 1 deletion)
@@ -25,4 +25,4 @@ tasks:
     - ceph orch host ls
     - ceph orch device ls
     - ceph orch ls --format yaml
-    - ceph orch ls | grep '^osd '
+    - ceph orch ls | grep 'osd'

qa/suites/orch/cephadm/upgrade/4-wait.yaml (1 addition, 1 deletion)
@@ -13,4 +13,4 @@ tasks:
    - ceph health detail
    - ceph versions | jq -e '.overall | length == 1'
    - ceph versions | jq -e '.overall | keys' | grep $sha1
-   - ceph orch ls | grep '^osd '
+   - ceph orch ls | grep 'osd'

src/pybind/mgr/cephadm/module.py (33 additions, 0 deletions)
@@ -42,6 +42,7 @@
     MgmtGatewaySpec,
     NvmeofServiceSpec,
 )
+from ceph.deployment.drive_group import DeviceSelection
 from ceph.utils import str_to_datetime, datetime_to_str, datetime_now
 from cephadm.serve import CephadmServe
 from cephadm.services.cephadmservice import CephadmDaemonDeploySpec
@@ -2867,12 +2868,44 @@ def apply_drivegroups(self, specs: List[DriveGroupSpec]) -> List[str]:
         """
         return [self._apply(spec) for spec in specs]
 
+    def create_osd_default_spec(self, drive_group: DriveGroupSpec) -> None:
+        # Create the default osd and attach a valid spec to it.
+
+        drive_group.unmanaged = False
+
+        host_pattern_obj = drive_group.placement.host_pattern
+        host = str(host_pattern_obj.pattern)
+        device_list = [d.path for d in drive_group.data_devices.paths] if drive_group.data_devices else []
+        devices = [{"path": d} for d in device_list]
+
+        osd_default_spec = DriveGroupSpec(
+            service_id="default",
+            placement=PlacementSpec(host_pattern=host),
+            data_devices=DeviceSelection(paths=devices),
+            unmanaged=False,
+            objectstore="bluestore"
+        )
+
+        self.log.info(f"Creating OSDs with service ID: {drive_group.service_id} on {host}:{device_list}")
+        self.spec_store.save(osd_default_spec)
+        self.apply([osd_default_spec])
+
     @handle_orch_error
     def create_osds(self, drive_group: DriveGroupSpec) -> str:
         hosts: List[HostSpec] = self.inventory.all_specs()
         filtered_hosts: List[str] = drive_group.placement.filter_matching_hostspecs(hosts)
         if not filtered_hosts:
             return "Invalid 'host:device' spec: host not found in cluster. Please check 'ceph orch host ls' for available hosts"
+
+        if not drive_group.service_id:
+            drive_group.service_id = "default"
+
+        if drive_group.service_id not in self.spec_store.all_specs:
+            self.log.info("osd.default does not exist. Creating it now.")
+            self.create_osd_default_spec(drive_group)
+        else:
+            self.log.info("osd.default already exists.")
+
         return self.osd_service.create_from_spec(drive_group)
 
     def _preview_osdspecs(self,
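
For completeness, a small hedged demonstration of the net effect from the caller's side, using the spec classes referenced in the hunk above (`DriveGroupSpec`, `DeviceSelection`, `PlacementSpec`). The host name and device path are invented, and the cephadm module plumbing is omitted:

    # Illustrative only: shows that an OSD spec created without a service name
    # resolves to the 'osd.default' service once create_osds() fills it in.
    from ceph.deployment.drive_group import DriveGroupSpec, DeviceSelection
    from ceph.deployment.service_spec import PlacementSpec

    spec = DriveGroupSpec(
        placement=PlacementSpec(host_pattern='host1'),      # hypothetical host
        data_devices=DeviceSelection(paths=['/dev/sdb']),   # hypothetical device
    )
    assert spec.service_id is None   # caller supplied no service name

    # create_osds() in the hunk above applies this fallback:
    if not spec.service_id:
        spec.service_id = 'default'

    # ServiceSpec.service_name() combines service_type and service_id.
    assert spec.service_name() == 'osd.default'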
